00:00:00.001 Started by upstream project "autotest-nightly" build number 4349
00:00:00.001 originally caused by:
00:00:00.001  Started by upstream project "nightly-trigger" build number 3712
00:00:00.001  originally caused by:
00:00:00.001   Started by timer
00:00:00.137 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.139 The recommended git tool is: git
00:00:00.139 using credential 00000000-0000-0000-0000-000000000002
00:00:00.142  > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.234 Fetching changes from the remote Git repository
00:00:00.238  > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.298 Using shallow fetch with depth 1
00:00:00.298 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.298  > git --version # timeout=10
00:00:00.346  > git --version # 'git version 2.39.2'
00:00:00.346 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.385 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.385  > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.464  > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.477  > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.492 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.492  > git config core.sparsecheckout # timeout=10
00:00:08.507  > git read-tree -mu HEAD # timeout=10
00:00:08.528  > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.556 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.556  > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.665 [Pipeline] Start of Pipeline
00:00:08.680 [Pipeline] library
00:00:08.681 Loading library shm_lib@master
00:00:08.682 Library shm_lib@master is cached. Copying from home.
00:00:08.696 [Pipeline] node
00:00:08.706 Running on CYP13 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.708 [Pipeline] {
00:00:08.722 [Pipeline] catchError
00:00:08.725 [Pipeline] {
00:00:08.739 [Pipeline] wrap
00:00:08.748 [Pipeline] {
00:00:08.756 [Pipeline] stage
00:00:08.758 [Pipeline] { (Prologue)
00:00:09.063 [Pipeline] sh
00:00:09.357 + logger -p user.info -t JENKINS-CI
00:00:09.382 [Pipeline] echo
00:00:09.384 Node: CYP13
00:00:09.458 [Pipeline] sh
00:00:09.765 [Pipeline] setCustomBuildProperty
00:00:09.776 [Pipeline] echo
00:00:09.777 Cleanup processes
00:00:09.782 [Pipeline] sh
00:00:10.072 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.072 1203044 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.087 [Pipeline] sh
00:00:10.378 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.378 ++ grep -v 'sudo pgrep'
00:00:10.378 ++ awk '{print $1}'
00:00:10.378 + sudo kill -9
00:00:10.378 + true
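The cleanup step above is a three-stage pipe: list processes whose command line mentions the SPDK checkout, drop the pgrep itself, and force-kill whatever remains. A minimal standalone sketch of the same pattern, assuming this job's workspace path (substitute your own):

    #!/usr/bin/env bash
    # Kill any stale SPDK processes left over from a previous run.
    # WORKSPACE is this job's path; adjust for your checkout location.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # '|| true' mirrors the log's '+ true': an empty kill list must not
    # fail the build when no stale processes exist.
    sudo kill -9 $pids || true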
00:00:10.394 [Pipeline] cleanWs
00:00:10.405 [WS-CLEANUP] Deleting project workspace...
00:00:10.405 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.414 [WS-CLEANUP] done
00:00:10.418 [Pipeline] setCustomBuildProperty
00:00:10.432 [Pipeline] sh
00:00:10.719 + sudo git config --global --replace-all safe.directory '*'
00:00:10.818 [Pipeline] httpRequest
00:00:11.215 [Pipeline] echo
00:00:11.217 Sorcerer 10.211.164.101 is alive
00:00:11.228 [Pipeline] retry
00:00:11.230 [Pipeline] {
00:00:11.246 [Pipeline] httpRequest
00:00:11.252 HttpMethod: GET
00:00:11.252 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.253 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.268 Response Code: HTTP/1.1 200 OK
00:00:11.268 Success: Status code 200 is in the accepted range: 200,404
00:00:11.269 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.456 [Pipeline] }
00:00:12.475 [Pipeline] // retry
00:00:12.483 [Pipeline] sh
00:00:12.775 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.860 [Pipeline] httpRequest
00:00:13.493 [Pipeline] echo
00:00:13.495 Sorcerer 10.211.164.101 is alive
00:00:13.505 [Pipeline] retry
00:00:13.508 [Pipeline] {
00:00:13.525 [Pipeline] httpRequest
00:00:13.530 HttpMethod: GET
00:00:13.531 URL: http://10.211.164.101/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:13.532 Sending request to url: http://10.211.164.101/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:13.550 Response Code: HTTP/1.1 200 OK
00:00:13.550 Success: Status code 200 is in the accepted range: 200,404
00:00:13.550 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:47.298 [Pipeline] }
00:00:47.315 [Pipeline] // retry
00:00:47.322 [Pipeline] sh
00:00:47.612 + tar --no-same-owner -xf spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:50.926 [Pipeline] sh
00:00:51.217 + git -C spdk log --oneline -n5
00:00:51.217 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:00:51.217 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:00:51.217 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:00:51.217 0ea9ac02f accel/mlx5: Create pool of UMRs
00:00:51.217 60adca7e1 lib/mlx5: API to configure UMR
00:00:51.228 [Pipeline] }
00:00:51.243 [Pipeline] // stage
00:00:51.252 [Pipeline] stage
00:00:51.256 [Pipeline] { (Prepare)
00:00:51.302 [Pipeline] writeFile
00:00:51.326 [Pipeline] sh
00:00:51.611 + logger -p user.info -t JENKINS-CI
00:00:51.628 [Pipeline] sh
00:00:51.913 + logger -p user.info -t JENKINS-CI
00:00:51.926 [Pipeline] sh
00:00:52.214 + cat autorun-spdk.conf
00:00:52.214 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.214 SPDK_TEST_NVMF=1
00:00:52.214 SPDK_TEST_NVME_CLI=1
00:00:52.214 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:52.214 SPDK_TEST_NVMF_NICS=e810
00:00:52.214 SPDK_RUN_ASAN=1
00:00:52.214 SPDK_RUN_UBSAN=1
00:00:52.214 NET_TYPE=phy
00:00:52.223 RUN_NIGHTLY=1
00:00:52.228 [Pipeline] readFile
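The two httpRequest/tar steps above pull pre-packaged tarballs of the jbp scripts and the SPDK revision under test from the "Sorcerer" package cache (10.211.164.101) rather than cloning over git. Outside the pipeline, a rough curl-based equivalent would be the following; the cache host is internal to this CI, so the URLs are illustrative only:

    # Illustrative fetch of the same two packages the pipeline downloaded.
    SORCERER=http://10.211.164.101/packages
    for pkg in jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz \
               spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz; do
        curl -fO "$SORCERER/$pkg"        # -f: treat HTTP errors as failures
        tar --no-same-owner -xf "$pkg"   # same extraction flags as the sh step
    done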
00:00:52.255 [Pipeline] withEnv
00:00:52.258 [Pipeline] {
00:00:52.274 [Pipeline] sh
00:00:52.595 + set -ex
00:00:52.595 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:52.595 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:52.595 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.595 ++ SPDK_TEST_NVMF=1
00:00:52.595 ++ SPDK_TEST_NVME_CLI=1
00:00:52.595 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:52.595 ++ SPDK_TEST_NVMF_NICS=e810
00:00:52.595 ++ SPDK_RUN_ASAN=1
00:00:52.595 ++ SPDK_RUN_UBSAN=1
00:00:52.595 ++ NET_TYPE=phy
00:00:52.595 ++ RUN_NIGHTLY=1
00:00:52.595 + case $SPDK_TEST_NVMF_NICS in
00:00:52.595 + DRIVERS=ice
00:00:52.595 + [[ tcp == \r\d\m\a ]]
00:00:52.595 + [[ -n ice ]]
00:00:52.595 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:52.595 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:00.788 rmmod: ERROR: Module irdma is not currently loaded
00:01:00.788 rmmod: ERROR: Module i40iw is not currently loaded
00:01:00.788 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:00.788 + true
00:01:00.788 + for D in $DRIVERS
00:01:00.788 + sudo modprobe ice
00:01:01.049 + exit 0
00:01:01.060 [Pipeline] }
00:01:01.074 [Pipeline] // withEnv
00:01:01.079 [Pipeline] }
00:01:01.092 [Pipeline] // stage
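The withEnv block above is the NIC preparation: SPDK_TEST_NVMF_NICS=e810 selects the ice driver, the competing RDMA-capable modules are unloaded first (the rmmod errors are expected for modules that are not loaded), and the chosen driver is reloaded. Condensed into a sketch:

    # Condensed form of the driver preparation above (e810 NICs => 'ice').
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;
    esac
    # Unload competing RDMA modules; ignore errors for ones not loaded.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe $D    # reload the driver for the NICs under test
    done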
00:01:01.127 [Pipeline] catchError
00:01:01.129 [Pipeline] {
00:01:01.148 [Pipeline] timeout
00:01:01.148 Timeout set to expire in 1 hr 0 min
00:01:01.150 [Pipeline] {
00:01:01.166 [Pipeline] stage
00:01:01.169 [Pipeline] { (Tests)
00:01:01.183 [Pipeline] sh
00:01:01.487 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:01.487 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:01.487 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:01.487 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:01.487 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:01.487 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:01.487 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:01.487 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:01.487 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:01.487 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:01.487 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:01.487 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:01.487 + source /etc/os-release
00:01:01.487 ++ NAME='Fedora Linux'
00:01:01.487 ++ VERSION='39 (Cloud Edition)'
00:01:01.487 ++ ID=fedora
00:01:01.487 ++ VERSION_ID=39
00:01:01.487 ++ VERSION_CODENAME=
00:01:01.487 ++ PLATFORM_ID=platform:f39
00:01:01.487 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:01.487 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:01.487 ++ LOGO=fedora-logo-icon
00:01:01.487 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:01.487 ++ HOME_URL=https://fedoraproject.org/
00:01:01.487 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:01.487 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:01.487 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:01.487 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:01.487 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:01.487 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:01.487 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:01.487 ++ SUPPORT_END=2024-11-12
00:01:01.487 ++ VARIANT='Cloud Edition'
00:01:01.487 ++ VARIANT_ID=cloud
00:01:01.487 + uname -a
00:01:01.487 Linux spdk-cyp-13 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:01.487 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:04.788 Hugepages
00:01:04.788 node   hugesize   free /  total
00:01:04.788 node0  1048576kB     0 /      0
00:01:04.788 node0  2048kB     1024 /   1024
00:01:04.788 node1  1048576kB     0 /      0
00:01:04.788 node1  2048kB     1024 /   1024
00:01:04.788
00:01:04.788 Type  BDF           Vendor Device NUMA Driver  Device Block devices
00:01:04.788 I/OAT 0000:00:01.0  8086   0b00   0    ioatdma -      -
00:01:04.788 I/OAT 0000:00:01.1  8086   0b00   0    ioatdma -      -
00:01:04.788 I/OAT 0000:00:01.2  8086   0b00   0    ioatdma -      -
00:01:04.788 I/OAT 0000:00:01.3  8086   0b00   0    ioatdma -      -
00:01:04.788 I/OAT 0000:00:01.4  8086   0b00   0    ioatdma -      -
00:01:04.788 I/OAT 0000:00:01.5  8086   0b00   0    ioatdma -      -
00:01:04.788 I/OAT 0000:00:01.6  8086   0b00   0    ioatdma -      -
00:01:04.788 I/OAT 0000:00:01.7  8086   0b00   0    ioatdma -      -
00:01:04.788 NVMe  0000:65:00.0  144d   a80a   0    nvme    nvme0  nvme0n1
00:01:04.788 I/OAT 0000:80:01.0  8086   0b00   1    ioatdma -      -
00:01:04.788 I/OAT 0000:80:01.1  8086   0b00   1    ioatdma -      -
00:01:04.788 I/OAT 0000:80:01.2  8086   0b00   1    ioatdma -      -
00:01:04.788 I/OAT 0000:80:01.3  8086   0b00   1    ioatdma -      -
00:01:04.788 I/OAT 0000:80:01.4  8086   0b00   1    ioatdma -      -
00:01:04.788 I/OAT 0000:80:01.5  8086   0b00   1    ioatdma -      -
00:01:04.788 I/OAT 0000:80:01.6  8086   0b00   1    ioatdma -      -
00:01:04.788 I/OAT 0000:80:01.7  8086   0b00   1    ioatdma -      -
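The hugepage counters that setup.sh status prints above come straight from sysfs; reading them directly looks roughly like this (standard kernel paths, not SPDK-specific):

    # Per-NUMA-node hugepage totals, as summarized by 'setup.sh status'.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            printf '%s %s: %s / %s\n' "$(basename "$node")" "$(basename "$hp")" \
                "$(cat "$hp"/free_hugepages)" "$(cat "$hp"/nr_hugepages)"
        done
    done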
00:01:04.788 + rm -f /tmp/spdk-ld-path
00:01:04.788 + source autorun-spdk.conf
00:01:04.788 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.788 ++ SPDK_TEST_NVMF=1
00:01:04.788 ++ SPDK_TEST_NVME_CLI=1
00:01:04.788 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.788 ++ SPDK_TEST_NVMF_NICS=e810
00:01:04.788 ++ SPDK_RUN_ASAN=1
00:01:04.788 ++ SPDK_RUN_UBSAN=1
00:01:04.788 ++ NET_TYPE=phy
00:01:04.788 ++ RUN_NIGHTLY=1
00:01:04.788 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:04.788 + [[ -n '' ]]
00:01:04.788 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:04.788 + for M in /var/spdk/build-*-manifest.txt
00:01:04.788 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:04.788 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:04.788 + for M in /var/spdk/build-*-manifest.txt
00:01:04.788 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:04.788 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:04.788 + for M in /var/spdk/build-*-manifest.txt
00:01:04.788 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:04.788 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:04.788 ++ uname
00:01:04.788 + [[ Linux == \L\i\n\u\x ]]
00:01:04.788 + sudo dmesg -T
00:01:04.788 + sudo dmesg --clear
00:01:04.788 + dmesg_pid=1204608
00:01:04.788 + [[ Fedora Linux == FreeBSD ]]
00:01:04.788 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:04.788 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:04.788 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:04.788 + [[ -x /usr/src/fio-static/fio ]]
00:01:04.788 + export FIO_BIN=/usr/src/fio-static/fio
00:01:04.788 + FIO_BIN=/usr/src/fio-static/fio
00:01:04.788 + sudo dmesg -Tw
00:01:04.788 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:04.788 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:04.788 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:04.788 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:04.788 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:04.788 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:04.788 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:04.788 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:04.788 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:05.050 04:54:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:05.050 04:54:18 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
04:54:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
04:54:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
04:54:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
04:54:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
04:54:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
04:54:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1
04:54:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
04:54:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
04:54:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1
00:01:05.051 04:54:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:05.051 04:54:18 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
04:54:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
04:54:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
04:54:18 -- scripts/common.sh@15 -- $ shopt -s extglob
04:54:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
04:54:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
04:54:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
04:54:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:54:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:54:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:54:18 -- paths/export.sh@5 -- $ export PATH
04:54:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:54:18 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
04:54:18 -- common/autobuild_common.sh@493 -- $ date +%s
04:54:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733716458.XXXXXX
04:54:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733716458.eWEQI9
04:54:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
04:54:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
04:54:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
04:54:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
04:54:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
04:54:18 -- common/autobuild_common.sh@509 -- $ get_config_params
04:54:18 -- common/autotest_common.sh@409 -- $ xtrace_disable
04:54:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:05.051 04:54:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
04:54:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
04:54:18 -- pm/common@17 -- $ local monitor
04:54:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:54:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:54:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:54:18 -- pm/common@21 -- $ date +%s
04:54:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:54:18 -- pm/common@25 -- $ sleep 1
04:54:18 -- pm/common@21 -- $ date +%s
04:54:18 -- pm/common@21 -- $ date +%s
04:54:18 -- pm/common@21 -- $ date +%s
00:01:05.051 04:54:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716458
00:01:05.051 04:54:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716458
00:01:05.051 04:54:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716458
00:01:05.051 04:54:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716458
00:01:05.051 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716458_collect-cpu-load.pm.log
00:01:05.051 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716458_collect-vmstat.pm.log
00:01:05.051 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716458_collect-cpu-temp.pm.log
00:01:05.051 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716458_collect-bmc-pm.bmc.pm.log
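Before building, autobuild forks four background resource collectors (CPU load, vmstat, CPU temperature, BMC power) that log into output/power for the life of the run. A sketch of the launch pattern; the collect-* helpers are SPDK's scripts/perf/pm tools and the monitor tag embeds the epoch timestamp seen above:

    # Sketch of the monitor startup above.
    POWER_OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    TAG=monitor.autobuild.sh.1733716458   # epoch-stamped prefix from this run
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        spdk/scripts/perf/pm/$mon -d "$POWER_OUT" -l -p "$TAG" &
    done
    sudo -E spdk/scripts/perf/pm/collect-bmc-pm -d "$POWER_OUT" -l -p "$TAG" &
    # autobuild tears these down from its stop_monitor_resources EXIT trap.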
00:01:05.995 04:54:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
04:54:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
04:54:19 -- spdk/autobuild.sh@12 -- $ umask 022
04:54:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
04:54:19 -- spdk/autobuild.sh@16 -- $ date -u
00:01:05.995 Mon Dec 9 03:54:19 AM UTC 2024
00:01:05.995 04:54:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:05.995 v25.01-pre-311-ga2f5e1c2d
00:01:06.256 04:54:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
04:54:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
04:54:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
04:54:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
04:54:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:06.256 ************************************
00:01:06.256 START TEST asan
00:01:06.256 ************************************
04:54:20 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:06.256 using asan
00:01:06.256
00:01:06.256 real 0m0.001s
00:01:06.256 user 0m0.000s
00:01:06.256 sys 0m0.000s
00:01:06.256 04:54:20 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
04:54:20 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:06.256 ************************************
00:01:06.256 END TEST asan
00:01:06.256 ************************************
00:01:06.256 04:54:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
04:54:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
04:54:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
04:54:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
04:54:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:06.256 ************************************
00:01:06.256 START TEST ubsan
00:01:06.256 ************************************
04:54:20 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:06.256 using ubsan
00:01:06.256
00:01:06.256 real 0m0.001s
00:01:06.256 user 0m0.000s
00:01:06.256 sys 0m0.000s
04:54:20 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
04:54:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:06.256 ************************************
00:01:06.256 END TEST ubsan
00:01:06.256 ************************************
00:01:06.256 04:54:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
04:54:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
04:54:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
04:54:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
04:54:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
04:54:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
04:54:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
04:54:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
04:54:20 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:06.256 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:06.256 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:06.828 Using 'verbs' RDMA provider
00:01:22.684 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:34.920 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:35.441 Creating mk/config.mk...done.
00:01:35.441 Creating mk/cc.flags.mk...done.
00:01:35.441 Type 'make' to build.
00:01:35.441 04:54:49 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
04:54:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
04:54:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
04:54:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:35.701 ************************************
00:01:35.701 START TEST make
00:01:35.701 ************************************
04:54:49 make -- common/autotest_common.sh@1129 -- $ make -j144
00:01:35.962 make[1]: Nothing to be done for 'all'.
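Everything from the configure line to "Type 'make' to build." can be reproduced outside CI. The flags match the config_params string assembled earlier from autorun-spdk.conf (ASAN/UBSAN on, unit tests off), plus --with-shared; only paths like /usr/src/fio are host-specific:

    # Reproduce this run's SPDK configuration locally (sketch; adjust paths).
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
    make -j144    # job count used on this CI host; use $(nproc) elsewhere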
00:01:45.966 The Meson build system
00:01:45.966 Version: 1.5.0
00:01:45.966 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:45.966 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:45.966 Build type: native build
00:01:45.966 Program cat found: YES (/usr/bin/cat)
00:01:45.966 Project name: DPDK
00:01:45.966 Project version: 24.03.0
00:01:45.966 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:45.966 C linker for the host machine: cc ld.bfd 2.40-14
00:01:45.966 Host machine cpu family: x86_64
00:01:45.966 Host machine cpu: x86_64
00:01:45.966 Message: ## Building in Developer Mode ##
00:01:45.966 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:45.966 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:45.966 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:45.966 Program python3 found: YES (/usr/bin/python3)
00:01:45.966 Program cat found: YES (/usr/bin/cat)
00:01:45.966 Compiler for C supports arguments -march=native: YES
00:01:45.966 Checking for size of "void *" : 8
00:01:45.966 Checking for size of "void *" : 8 (cached)
00:01:45.966 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:45.966 Library m found: YES
00:01:45.966 Library numa found: YES
00:01:45.966 Has header "numaif.h" : YES
00:01:45.966 Library fdt found: NO
00:01:45.966 Library execinfo found: NO
00:01:45.966 Has header "execinfo.h" : YES
00:01:45.966 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:45.966 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:45.966 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:45.966 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:45.966 Run-time dependency openssl found: YES 3.1.1
00:01:45.966 Run-time dependency libpcap found: YES 1.10.4
00:01:45.966 Has header "pcap.h" with dependency libpcap: YES
00:01:45.966 Compiler for C supports arguments -Wcast-qual: YES
00:01:45.966 Compiler for C supports arguments -Wdeprecated: YES
00:01:45.966 Compiler for C supports arguments -Wformat: YES
00:01:45.966 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:45.966 Compiler for C supports arguments -Wformat-security: NO
00:01:45.966 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:45.966 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:45.966 Compiler for C supports arguments -Wnested-externs: YES
00:01:45.966 Compiler for C supports arguments -Wold-style-definition: YES
00:01:45.966 Compiler for C supports arguments -Wpointer-arith: YES
00:01:45.966 Compiler for C supports arguments -Wsign-compare: YES
00:01:45.966 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:45.966 Compiler for C supports arguments -Wundef: YES
00:01:45.966 Compiler for C supports arguments -Wwrite-strings: YES
00:01:45.966 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:45.966 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:45.966 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:45.966 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:45.966 Program objdump found: YES (/usr/bin/objdump)
00:01:45.966 Compiler for C supports arguments -mavx512f: YES
00:01:45.966 Checking if "AVX512 checking" compiles: YES
00:01:45.966 Fetching value of define "__SSE4_2__" : 1
00:01:45.966 Fetching value of define "__AES__" : 1
00:01:45.966 Fetching value of define "__AVX__" : 1
00:01:45.966 Fetching value of define "__AVX2__" : 1
00:01:45.966 Fetching value of define "__AVX512BW__" : 1
00:01:45.966 Fetching value of define "__AVX512CD__" : 1
00:01:45.966 Fetching value of define "__AVX512DQ__" : 1
00:01:45.966 Fetching value of define "__AVX512F__" : 1
00:01:45.966 Fetching value of define "__AVX512VL__" : 1
00:01:45.966 Fetching value of define "__PCLMUL__" : 1
00:01:45.966 Fetching value of define "__RDRND__" : 1
00:01:45.966 Fetching value of define "__RDSEED__" : 1
00:01:45.966 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:45.966 Fetching value of define "__znver1__" : (undefined)
00:01:45.966 Fetching value of define "__znver2__" : (undefined)
00:01:45.966 Fetching value of define "__znver3__" : (undefined)
00:01:45.966 Fetching value of define "__znver4__" : (undefined)
00:01:45.966 Library asan found: YES
00:01:45.966 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:45.966 Message: lib/log: Defining dependency "log"
00:01:45.966 Message: lib/kvargs: Defining dependency "kvargs"
00:01:45.966 Message: lib/telemetry: Defining dependency "telemetry"
00:01:45.966 Library rt found: YES
00:01:45.966 Checking for function "getentropy" : NO
00:01:45.966 Message: lib/eal: Defining dependency "eal"
00:01:45.966 Message: lib/ring: Defining dependency "ring"
00:01:45.966 Message: lib/rcu: Defining dependency "rcu"
00:01:45.966 Message: lib/mempool: Defining dependency "mempool"
00:01:45.966 Message: lib/mbuf: Defining dependency "mbuf"
00:01:45.966 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:45.966 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:45.966 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:45.966 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:45.966 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:45.966 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:45.966 Compiler for C supports arguments -mpclmul: YES
00:01:45.966 Compiler for C supports arguments -maes: YES
00:01:45.966 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:45.966 Compiler for C supports arguments -mavx512bw: YES
00:01:45.966 Compiler for C supports arguments -mavx512dq: YES
00:01:45.966 Compiler for C supports arguments -mavx512vl: YES
00:01:45.966 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:45.966 Compiler for C supports arguments -mavx2: YES
00:01:45.966 Compiler for C supports arguments -mavx: YES
00:01:45.966 Message: lib/net: Defining dependency "net"
00:01:45.966 Message: lib/meter: Defining dependency "meter"
00:01:45.966 Message: lib/ethdev: Defining dependency "ethdev"
00:01:45.966 Message: lib/pci: Defining dependency "pci"
00:01:45.966 Message: lib/cmdline: Defining dependency "cmdline"
00:01:45.966 Message: lib/hash: Defining dependency "hash"
00:01:45.966 Message: lib/timer: Defining dependency "timer"
00:01:45.966 Message: lib/compressdev: Defining dependency "compressdev"
00:01:45.966 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:45.966 Message: lib/dmadev: Defining dependency "dmadev"
00:01:45.966 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:45.966 Message: lib/power: Defining dependency "power"
00:01:45.966 Message: lib/reorder: Defining dependency "reorder"
00:01:45.966 Message: lib/security: Defining dependency "security"
00:01:45.966 Has header "linux/userfaultfd.h" : YES
00:01:45.966 Has header "linux/vduse.h" : YES
00:01:45.966 Message: lib/vhost: Defining dependency "vhost"
00:01:45.966 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:45.966 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:45.966 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:45.966 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:45.966 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:45.966 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:45.966 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:45.966 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:45.966 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:45.966 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:45.966 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:45.966 Configuring doxy-api-html.conf using configuration
00:01:45.966 Configuring doxy-api-man.conf using configuration
00:01:45.966 Program mandb found: YES (/usr/bin/mandb)
00:01:45.966 Program sphinx-build found: NO
00:01:45.966 Configuring rte_build_config.h using configuration
00:01:45.966 Message:
00:01:45.966 =================
00:01:45.966 Applications Enabled
00:01:45.966 =================
00:01:45.966
00:01:45.966 apps:
00:01:45.966
00:01:45.966
00:01:45.966 Message:
00:01:45.966 =================
00:01:45.966 Libraries Enabled
00:01:45.966 =================
00:01:45.966
00:01:45.966 libs:
00:01:45.966 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:45.966 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:45.966 cryptodev, dmadev, power, reorder, security, vhost,
00:01:45.966
00:01:45.966 Message:
00:01:45.966 ===============
00:01:45.966 Drivers Enabled
00:01:45.966 ===============
00:01:45.966
00:01:45.966 common:
00:01:45.966
00:01:45.966 bus:
00:01:45.966 pci, vdev,
00:01:45.966 mempool:
00:01:45.966 ring,
00:01:45.966 dma:
00:01:45.966
00:01:45.966 net:
00:01:45.966
00:01:45.966 crypto:
00:01:45.966
00:01:45.966 compress:
00:01:45.966
00:01:45.966 vdpa:
00:01:45.966
00:01:45.966
00:01:45.966 Message:
00:01:45.966 =================
00:01:45.966 Content Skipped
00:01:45.966 =================
00:01:45.966
00:01:45.966 apps:
00:01:45.966 dumpcap: explicitly disabled via build config
00:01:45.966 graph: explicitly disabled via build config
00:01:45.966 pdump: explicitly disabled via build config
00:01:45.966 proc-info: explicitly disabled via build config
00:01:45.966 test-acl: explicitly disabled via build config
00:01:45.966 test-bbdev: explicitly disabled via build config
00:01:45.966 test-cmdline: explicitly disabled via build config
00:01:45.966 test-compress-perf: explicitly disabled via build config
00:01:45.966 test-crypto-perf: explicitly disabled via build config
00:01:45.966 test-dma-perf: explicitly disabled via build config
00:01:45.966 test-eventdev: explicitly disabled via build config
00:01:45.966 test-fib: explicitly disabled via build config
00:01:45.966 test-flow-perf: explicitly disabled via build config
00:01:45.966 test-gpudev: explicitly disabled via build config
00:01:45.966 test-mldev: explicitly disabled via build config
00:01:45.966 test-pipeline: explicitly disabled via build config
00:01:45.966 test-pmd: explicitly disabled via build config
00:01:45.966 test-regex: explicitly disabled via build config
00:01:45.967 test-sad: explicitly disabled via build config
00:01:45.967 test-security-perf: explicitly disabled via build config
00:01:45.967
00:01:45.967 libs:
00:01:45.967 argparse: explicitly disabled via build config
00:01:45.967 metrics: explicitly disabled via build config
00:01:45.967 acl: explicitly disabled via build config
00:01:45.967 bbdev: explicitly disabled via build config
00:01:45.967 bitratestats: explicitly disabled via build config
00:01:45.967 bpf: explicitly disabled via build config
00:01:45.967 cfgfile: explicitly disabled via build config
00:01:45.967 distributor: explicitly disabled via build config
00:01:45.967 efd: explicitly disabled via build config
00:01:45.967 eventdev: explicitly disabled via build config
00:01:45.967 dispatcher: explicitly disabled via build config
00:01:45.967 gpudev: explicitly disabled via build config
00:01:45.967 gro: explicitly disabled via build config
00:01:45.967 gso: explicitly disabled via build config
00:01:45.967 ip_frag: explicitly disabled via build config
00:01:45.967 jobstats: explicitly disabled via build config
00:01:45.967 latencystats: explicitly disabled via build config
00:01:45.967 lpm: explicitly disabled via build config
00:01:45.967 member: explicitly disabled via build config
00:01:45.967 pcapng: explicitly disabled via build config
00:01:45.967 rawdev: explicitly disabled via build config
00:01:45.967 regexdev: explicitly disabled via build config
00:01:45.967 mldev: explicitly disabled via build config
00:01:45.967 rib: explicitly disabled via build config
00:01:45.967 sched: explicitly disabled via build config
00:01:45.967 stack: explicitly disabled via build config
00:01:45.967 ipsec: explicitly disabled via build config
00:01:45.967 pdcp: explicitly disabled via build config
00:01:45.967 fib: explicitly disabled via build config
00:01:45.967 port: explicitly disabled via build config
00:01:45.967 pdump: explicitly disabled via build config
00:01:45.967 table: explicitly disabled via build config
00:01:45.967 pipeline: explicitly disabled via build config
00:01:45.967 graph: explicitly disabled via build config
00:01:45.967 node: explicitly disabled via build config
00:01:45.967
00:01:45.967 drivers:
00:01:45.967 common/cpt: not in enabled drivers build config
00:01:45.967 common/dpaax: not in enabled drivers build config
00:01:45.967 common/iavf: not in enabled drivers build config
00:01:45.967 common/idpf: not in enabled drivers build config
00:01:45.967 common/ionic: not in enabled drivers build config
00:01:45.967 common/mvep: not in enabled drivers build config
00:01:45.967 common/octeontx: not in enabled drivers build config
00:01:45.967 bus/auxiliary: not in enabled drivers build config
00:01:45.967 bus/cdx: not in enabled drivers build config
00:01:45.967 bus/dpaa: not in enabled drivers build config
00:01:45.967 bus/fslmc: not in enabled drivers build config
00:01:45.967 bus/ifpga: not in enabled drivers build config
00:01:45.967 bus/platform: not in enabled drivers build config
00:01:45.967 bus/uacce: not in enabled drivers build config
00:01:45.967 bus/vmbus: not in enabled drivers build config
00:01:45.967 common/cnxk: not in enabled drivers build config
00:01:45.967 common/mlx5: not in enabled drivers build config
00:01:45.967 common/nfp: not in enabled drivers build config
00:01:45.967 common/nitrox: not in enabled drivers build config
00:01:45.967 common/qat: not in enabled drivers build config
00:01:45.967 common/sfc_efx: not in enabled drivers build config
00:01:45.967 mempool/bucket: not in enabled drivers build config
00:01:45.967 mempool/cnxk: not in enabled drivers build config
00:01:45.967 mempool/dpaa: not in enabled drivers build config
00:01:45.967 mempool/dpaa2: not in enabled drivers build config
00:01:45.967 mempool/octeontx: not in enabled drivers build config
00:01:45.967 mempool/stack: not in enabled drivers build config
00:01:45.967 dma/cnxk: not in enabled drivers build config
00:01:45.967 dma/dpaa: not in enabled drivers build config
00:01:45.967 dma/dpaa2: not in enabled drivers build config
00:01:45.967 dma/hisilicon: not in enabled drivers build config
00:01:45.967 dma/idxd: not in enabled drivers build config
00:01:45.967 dma/ioat: not in enabled drivers build config
00:01:45.967 dma/skeleton: not in enabled drivers build config
00:01:45.967 net/af_packet: not in enabled drivers build config
00:01:45.967 net/af_xdp: not in enabled drivers build config
00:01:45.967 net/ark: not in enabled drivers build config
00:01:45.967 net/atlantic: not in enabled drivers build config
00:01:45.967 net/avp: not in enabled drivers build config
00:01:45.967 net/axgbe: not in enabled drivers build config
00:01:45.967 net/bnx2x: not in enabled drivers build config
00:01:45.967 net/bnxt: not in enabled drivers build config
00:01:45.967 net/bonding: not in enabled drivers build config
00:01:45.967 net/cnxk: not in enabled drivers build config
00:01:45.967 net/cpfl: not in enabled drivers build config
00:01:45.967 net/cxgbe: not in enabled drivers build config
00:01:45.967 net/dpaa: not in enabled drivers build config
00:01:45.967 net/dpaa2: not in enabled drivers build config
00:01:45.967 net/e1000: not in enabled drivers build config
00:01:45.967 net/ena: not in enabled drivers build config
00:01:45.967 net/enetc: not in enabled drivers build config
00:01:45.967 net/enetfec: not in enabled drivers build config
00:01:45.967 net/enic: not in enabled drivers build config
00:01:45.967 net/failsafe: not in enabled drivers build config
00:01:45.967 net/fm10k: not in enabled drivers build config
00:01:45.967 net/gve: not in enabled drivers build config
00:01:45.967 net/hinic: not in enabled drivers build config
00:01:45.967 net/hns3: not in enabled drivers build config
00:01:45.967 net/i40e: not in enabled drivers build config
00:01:45.967 net/iavf: not in enabled drivers build config
00:01:45.967 net/ice: not in enabled drivers build config
00:01:45.967 net/idpf: not in enabled drivers build config
00:01:45.967 net/igc: not in enabled drivers build config
00:01:45.967 net/ionic: not in enabled drivers build config
00:01:45.967 net/ipn3ke: not in enabled drivers build config
00:01:45.967 net/ixgbe: not in enabled drivers build config
00:01:45.967 net/mana: not in enabled drivers build config
00:01:45.967 net/memif: not in enabled drivers build config
00:01:45.967 net/mlx4: not in enabled drivers build config
00:01:45.967 net/mlx5: not in enabled drivers build config
00:01:45.967 net/mvneta: not in enabled drivers build config
00:01:45.967 net/mvpp2: not in enabled drivers build config
00:01:45.967 net/netvsc: not in enabled drivers build config
00:01:45.967 net/nfb: not in enabled drivers build config
00:01:45.967 net/nfp: not in enabled drivers build config
00:01:45.967 net/ngbe: not in enabled drivers build config
00:01:45.967 net/null: not in enabled drivers build config
00:01:45.967 net/octeontx: not in enabled drivers build config
00:01:45.967 net/octeon_ep: not in enabled drivers build config
00:01:45.967 net/pcap: not in enabled drivers build config
00:01:45.967 net/pfe: not in enabled drivers build config
00:01:45.967 net/qede: not in enabled drivers build config
00:01:45.967 net/ring: not in enabled drivers build config
00:01:45.967 net/sfc: not in enabled drivers build config
00:01:45.967 net/softnic: not in enabled drivers build config
00:01:45.967 net/tap: not in enabled drivers build config
00:01:45.967 net/thunderx: not in enabled drivers build config
00:01:45.967 net/txgbe: not in enabled drivers build config
00:01:45.967 net/vdev_netvsc: not in enabled drivers build config
00:01:45.967 net/vhost: not in enabled drivers build config
00:01:45.967 net/virtio: not in enabled drivers build config
00:01:45.967 net/vmxnet3: not in enabled drivers build config
00:01:45.967 raw/*: missing internal dependency, "rawdev"
00:01:45.967 crypto/armv8: not in enabled drivers build config
00:01:45.967 crypto/bcmfs: not in enabled drivers build config
00:01:45.967 crypto/caam_jr: not in enabled drivers build config
00:01:45.967 crypto/ccp: not in enabled drivers build config
00:01:45.967 crypto/cnxk: not in enabled drivers build config
00:01:45.967 crypto/dpaa_sec: not in enabled drivers build config
00:01:45.967 crypto/dpaa2_sec: not in enabled drivers build config
00:01:45.967 crypto/ipsec_mb: not in enabled drivers build config
00:01:45.967 crypto/mlx5: not in enabled drivers build config
00:01:45.967 crypto/mvsam: not in enabled drivers build config
00:01:45.967 crypto/nitrox: not in enabled drivers build config
00:01:45.967 crypto/null: not in enabled drivers build config
00:01:45.967 crypto/octeontx: not in enabled drivers build config
00:01:45.967 crypto/openssl: not in enabled drivers build config
00:01:45.967 crypto/scheduler: not in enabled drivers build config
00:01:45.967 crypto/uadk: not in enabled drivers build config
00:01:45.967 crypto/virtio: not in enabled drivers build config
00:01:45.968 compress/isal: not in enabled drivers build config
00:01:45.968 compress/mlx5: not in enabled drivers build config
00:01:45.968 compress/nitrox: not in enabled drivers build config
00:01:45.968 compress/octeontx: not in enabled drivers build config
00:01:45.968 compress/zlib: not in enabled drivers build config
00:01:45.968 regex/*: missing internal dependency, "regexdev"
00:01:45.968 ml/*: missing internal dependency, "mldev"
00:01:45.968 vdpa/ifc: not in enabled drivers build config
00:01:45.968 vdpa/mlx5: not in enabled drivers build config
00:01:45.968 vdpa/nfp: not in enabled drivers build config
00:01:45.968 vdpa/sfc: not in enabled drivers build config
00:01:45.968 event/*: missing internal dependency, "eventdev"
00:01:45.968 baseband/*: missing internal dependency, "bbdev"
00:01:45.968 gpu/*: missing internal dependency, "gpudev"
00:01:45.968
00:01:45.968
00:01:45.968 Build targets in project: 84
00:01:45.968
00:01:45.968 DPDK 24.03.0
00:01:45.968
00:01:45.968 User defined options
00:01:45.968 buildtype : debug
00:01:45.968 default_library : shared
00:01:45.968 libdir : lib
00:01:45.968 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:45.968 b_sanitize : address
00:01:45.968 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:45.968 c_link_args :
00:01:45.968 cpu_instruction_set: native
00:01:45.968 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:45.968 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:45.968 enable_docs : false
00:01:45.968 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:45.968 enable_kmods : false
00:01:45.968 max_lcores : 128
00:01:45.968 tests : false
00:01:45.968
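The "User defined options" summary is SPDK's canned DPDK configuration: a debug, shared-library build with ASan linked in (b_sanitize=address) and most apps, libs, and drivers compiled out. Passed to meson by hand it would look roughly like this; the long disable_apps/disable_libs/enable_drivers lists are exactly the ones printed above:

    # Approximate meson invocation behind the options summary above.
    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    # plus -Ddisable_apps=..., -Ddisable_libs=..., -Denable_drivers=... as listed above
    ninja -C build-tmp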
00:01:45.968 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:45.968 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:45.968 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[7/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[9/267] Linking static target lib/librte_kvargs.a
[10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[12/267] Compiling C object lib/librte_log.a.p/log_log.c.o
[13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[18/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[19/267] Linking static target lib/librte_log.a
[20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
[22/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[29/267] Linking static target lib/librte_pci.a
[30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[31/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
[33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
[37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
[38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
[39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
[42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[51/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[54/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
[55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[60/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[63/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
[66/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
[67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[68/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
[69/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:46.227 [73/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[75/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
[76/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[78/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
[79/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
[80/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
[81/267] Linking static target lib/librte_meter.a
[82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[83/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
[84/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
[85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
[86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[87/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
[88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
[89/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
[91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
[92/267] Linking static target lib/librte_telemetry.a
[93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
[94/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
[95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
[96/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
[97/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
[98/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
[100/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
[101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
[103/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
[104/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
[105/267] Linking static target lib/librte_ring.a
[106/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
[107/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[108/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
[109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
[110/267] Linking static target lib/librte_cmdline.a
[111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
[112/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.227 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:46.227 [115/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.227 [116/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.227 [117/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.227 [118/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:46.227 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:46.227 [120/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:46.227 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:46.227 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.227 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.227 [124/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.227 [125/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.227 [126/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.227 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:46.227 [128/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:46.227 [129/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.227 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.227 [131/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:46.227 [132/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:46.497 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:46.497 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.497 [135/267] Linking target lib/librte_log.so.24.1 00:01:46.497 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:46.497 [137/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.497 [138/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:46.497 [139/267] Linking static target lib/librte_timer.a 00:01:46.497 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:46.497 [141/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.497 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:46.497 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:46.497 [144/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:46.497 [145/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:46.497 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:46.497 [147/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:46.497 [148/267] Linking static target lib/librte_dmadev.a 00:01:46.497 [149/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.497 [150/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.497 [151/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:46.497 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.497 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:46.497 
[154/267] Linking static target lib/librte_rcu.a 00:01:46.497 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:46.497 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:46.497 [157/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.497 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:46.497 [159/267] Linking static target lib/librte_power.a 00:01:46.497 [160/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.497 [161/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:46.497 [162/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.497 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:46.497 [164/267] Linking static target lib/librte_compressdev.a 00:01:46.497 [165/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:46.497 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.497 [167/267] Linking target lib/librte_kvargs.so.24.1 00:01:46.497 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:46.497 [169/267] Linking static target lib/librte_mempool.a 00:01:46.497 [170/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.497 [171/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:46.497 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:46.497 [173/267] Linking static target lib/librte_net.a 00:01:46.497 [174/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.497 [175/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:46.497 [176/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:46.497 [177/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:46.497 [178/267] Linking static target lib/librte_reorder.a 00:01:46.758 [179/267] Linking static target lib/librte_eal.a 00:01:46.758 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.758 [181/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.758 [182/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.758 [183/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:46.758 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.758 [185/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.758 [186/267] Linking static target drivers/librte_bus_vdev.a 00:01:46.758 [187/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:46.758 [188/267] Linking static target lib/librte_security.a 00:01:46.758 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.758 [190/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:46.758 [191/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.758 [192/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.758 [193/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.758 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.758 
[195/267] Linking static target drivers/librte_bus_pci.a 00:01:46.758 [196/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.758 [197/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.758 [198/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.758 [199/267] Linking static target drivers/librte_mempool_ring.a 00:01:46.758 [200/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.758 [201/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.758 [202/267] Linking static target lib/librte_mbuf.a 00:01:46.758 [203/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.019 [204/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:47.019 [205/267] Linking target lib/librte_telemetry.so.24.1 00:01:47.019 [206/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:47.019 [207/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.019 [208/267] Linking static target lib/librte_hash.a 00:01:47.019 [209/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:47.019 [210/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.019 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.280 [212/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.280 [213/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.280 [214/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:47.280 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.541 [216/267] Linking static target lib/librte_cryptodev.a 00:01:47.541 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:47.541 [218/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.541 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.541 [220/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.803 [221/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.803 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.063 [223/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.063 [224/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:48.324 [225/267] Linking static target lib/librte_ethdev.a 00:01:48.324 [226/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.713 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.101 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:51.101 [229/267] Linking static target lib/librte_vhost.a 00:01:53.016 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.311 [231/267] Generating lib/ethdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:58.311 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.311 [233/267] Linking target lib/librte_eal.so.24.1 00:01:58.311 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:58.571 [235/267] Linking target lib/librte_meter.so.24.1 00:01:58.571 [236/267] Linking target lib/librte_pci.so.24.1 00:01:58.571 [237/267] Linking target lib/librte_ring.so.24.1 00:01:58.571 [238/267] Linking target lib/librte_timer.so.24.1 00:01:58.571 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:58.571 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:58.571 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:58.571 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:58.571 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:58.571 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:58.571 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:58.571 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:58.571 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:58.571 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:58.832 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.832 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:58.832 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.832 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:59.092 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:59.092 [254/267] Linking target lib/librte_reorder.so.24.1 00:01:59.092 [255/267] Linking target lib/librte_net.so.24.1 00:01:59.092 [256/267] Linking target lib/librte_compressdev.so.24.1 00:01:59.092 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:59.092 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:59.092 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:59.353 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:59.353 [261/267] Linking target lib/librte_hash.so.24.1 00:01:59.353 [262/267] Linking target lib/librte_security.so.24.1 00:01:59.353 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:59.353 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:59.353 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:59.353 [266/267] Linking target lib/librte_power.so.24.1 00:01:59.353 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:59.353 INFO: autodetecting backend as ninja 00:01:59.353 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:02.656 CC lib/log/log.o 00:02:02.656 CC lib/log/log_flags.o 00:02:02.656 CC lib/log/log_deprecated.o 00:02:02.656 CC lib/ut_mock/mock.o 00:02:02.656 CC lib/ut/ut.o 00:02:02.919 LIB libspdk_ut.a 00:02:02.919 LIB libspdk_log.a 00:02:02.919 LIB libspdk_ut_mock.a 00:02:02.919 SO libspdk_log.so.7.1 00:02:02.919 SO libspdk_ut.so.2.0 00:02:02.919 SO libspdk_ut_mock.so.6.0 00:02:02.919 SYMLINK libspdk_ut.so 00:02:02.919 SYMLINK libspdk_ut_mock.so 
00:02:02.919 SYMLINK libspdk_log.so 00:02:03.491 CC lib/dma/dma.o 00:02:03.491 CXX lib/trace_parser/trace.o 00:02:03.491 CC lib/util/base64.o 00:02:03.491 CC lib/ioat/ioat.o 00:02:03.491 CC lib/util/bit_array.o 00:02:03.491 CC lib/util/cpuset.o 00:02:03.491 CC lib/util/crc16.o 00:02:03.491 CC lib/util/crc32.o 00:02:03.491 CC lib/util/crc32c.o 00:02:03.491 CC lib/util/crc32_ieee.o 00:02:03.491 CC lib/util/crc64.o 00:02:03.491 CC lib/util/dif.o 00:02:03.491 CC lib/util/fd.o 00:02:03.491 CC lib/util/fd_group.o 00:02:03.491 CC lib/util/file.o 00:02:03.491 CC lib/util/hexlify.o 00:02:03.491 CC lib/util/iov.o 00:02:03.491 CC lib/util/math.o 00:02:03.491 CC lib/util/net.o 00:02:03.491 CC lib/util/pipe.o 00:02:03.491 CC lib/util/strerror_tls.o 00:02:03.491 CC lib/util/string.o 00:02:03.491 CC lib/util/uuid.o 00:02:03.491 CC lib/util/xor.o 00:02:03.491 CC lib/util/zipf.o 00:02:03.491 CC lib/util/md5.o 00:02:03.491 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.491 CC lib/vfio_user/host/vfio_user.o 00:02:03.491 LIB libspdk_dma.a 00:02:03.751 SO libspdk_dma.so.5.0 00:02:03.751 SYMLINK libspdk_dma.so 00:02:03.751 LIB libspdk_ioat.a 00:02:03.751 SO libspdk_ioat.so.7.0 00:02:03.751 SYMLINK libspdk_ioat.so 00:02:03.751 LIB libspdk_vfio_user.a 00:02:04.019 SO libspdk_vfio_user.so.5.0 00:02:04.019 SYMLINK libspdk_vfio_user.so 00:02:04.019 LIB libspdk_util.a 00:02:04.282 SO libspdk_util.so.10.1 00:02:04.282 LIB libspdk_trace_parser.a 00:02:04.282 SYMLINK libspdk_util.so 00:02:04.282 SO libspdk_trace_parser.so.6.0 00:02:04.544 SYMLINK libspdk_trace_parser.so 00:02:04.805 CC lib/rdma_utils/rdma_utils.o 00:02:04.805 CC lib/json/json_parse.o 00:02:04.805 CC lib/json/json_util.o 00:02:04.805 CC lib/json/json_write.o 00:02:04.805 CC lib/idxd/idxd.o 00:02:04.805 CC lib/idxd/idxd_user.o 00:02:04.805 CC lib/idxd/idxd_kernel.o 00:02:04.805 CC lib/vmd/vmd.o 00:02:04.805 CC lib/env_dpdk/env.o 00:02:04.805 CC lib/vmd/led.o 00:02:04.805 CC lib/env_dpdk/memory.o 00:02:04.805 CC lib/conf/conf.o 00:02:04.805 CC lib/env_dpdk/pci.o 00:02:04.805 CC lib/env_dpdk/init.o 00:02:04.805 CC lib/env_dpdk/threads.o 00:02:04.805 CC lib/env_dpdk/pci_ioat.o 00:02:04.805 CC lib/env_dpdk/pci_virtio.o 00:02:04.805 CC lib/env_dpdk/pci_vmd.o 00:02:04.805 CC lib/env_dpdk/pci_idxd.o 00:02:04.805 CC lib/env_dpdk/pci_event.o 00:02:04.805 CC lib/env_dpdk/sigbus_handler.o 00:02:04.805 CC lib/env_dpdk/pci_dpdk.o 00:02:04.805 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.805 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:05.066 LIB libspdk_conf.a 00:02:05.066 LIB libspdk_rdma_utils.a 00:02:05.066 SO libspdk_conf.so.6.0 00:02:05.066 LIB libspdk_json.a 00:02:05.066 SO libspdk_rdma_utils.so.1.0 00:02:05.066 SO libspdk_json.so.6.0 00:02:05.066 SYMLINK libspdk_conf.so 00:02:05.066 SYMLINK libspdk_rdma_utils.so 00:02:05.328 SYMLINK libspdk_json.so 00:02:05.589 LIB libspdk_idxd.a 00:02:05.589 LIB libspdk_vmd.a 00:02:05.589 CC lib/rdma_provider/common.o 00:02:05.589 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:05.589 SO libspdk_idxd.so.12.1 00:02:05.589 SO libspdk_vmd.so.6.0 00:02:05.589 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.589 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.589 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.589 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.589 SYMLINK libspdk_idxd.so 00:02:05.589 SYMLINK libspdk_vmd.so 00:02:05.851 LIB libspdk_rdma_provider.a 00:02:05.851 SO libspdk_rdma_provider.so.7.0 00:02:05.851 LIB libspdk_jsonrpc.a 00:02:05.851 SYMLINK libspdk_rdma_provider.so 00:02:05.851 SO libspdk_jsonrpc.so.6.0 00:02:06.112 SYMLINK 
libspdk_jsonrpc.so 00:02:06.373 CC lib/rpc/rpc.o 00:02:06.373 LIB libspdk_env_dpdk.a 00:02:06.373 SO libspdk_env_dpdk.so.15.1 00:02:06.634 LIB libspdk_rpc.a 00:02:06.634 SYMLINK libspdk_env_dpdk.so 00:02:06.634 SO libspdk_rpc.so.6.0 00:02:06.634 SYMLINK libspdk_rpc.so 00:02:07.207 CC lib/trace/trace.o 00:02:07.207 CC lib/trace/trace_flags.o 00:02:07.207 CC lib/trace/trace_rpc.o 00:02:07.207 CC lib/keyring/keyring.o 00:02:07.207 CC lib/keyring/keyring_rpc.o 00:02:07.207 CC lib/notify/notify.o 00:02:07.207 CC lib/notify/notify_rpc.o 00:02:07.207 LIB libspdk_notify.a 00:02:07.207 SO libspdk_notify.so.6.0 00:02:07.468 LIB libspdk_keyring.a 00:02:07.468 LIB libspdk_trace.a 00:02:07.468 SO libspdk_keyring.so.2.0 00:02:07.468 SYMLINK libspdk_notify.so 00:02:07.468 SO libspdk_trace.so.11.0 00:02:07.468 SYMLINK libspdk_keyring.so 00:02:07.468 SYMLINK libspdk_trace.so 00:02:08.040 CC lib/thread/thread.o 00:02:08.040 CC lib/thread/iobuf.o 00:02:08.040 CC lib/sock/sock.o 00:02:08.040 CC lib/sock/sock_rpc.o 00:02:08.301 LIB libspdk_sock.a 00:02:08.301 SO libspdk_sock.so.10.0 00:02:08.563 SYMLINK libspdk_sock.so 00:02:08.824 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.824 CC lib/nvme/nvme_ctrlr.o 00:02:08.824 CC lib/nvme/nvme_fabric.o 00:02:08.824 CC lib/nvme/nvme_ns_cmd.o 00:02:08.824 CC lib/nvme/nvme_ns.o 00:02:08.824 CC lib/nvme/nvme_pcie_common.o 00:02:08.824 CC lib/nvme/nvme_pcie.o 00:02:08.824 CC lib/nvme/nvme_qpair.o 00:02:08.824 CC lib/nvme/nvme.o 00:02:08.824 CC lib/nvme/nvme_quirks.o 00:02:08.824 CC lib/nvme/nvme_transport.o 00:02:08.824 CC lib/nvme/nvme_discovery.o 00:02:08.824 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.824 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.824 CC lib/nvme/nvme_tcp.o 00:02:08.824 CC lib/nvme/nvme_opal.o 00:02:08.824 CC lib/nvme/nvme_io_msg.o 00:02:08.824 CC lib/nvme/nvme_poll_group.o 00:02:08.824 CC lib/nvme/nvme_zns.o 00:02:08.824 CC lib/nvme/nvme_stubs.o 00:02:08.824 CC lib/nvme/nvme_auth.o 00:02:08.824 CC lib/nvme/nvme_cuse.o 00:02:08.824 CC lib/nvme/nvme_rdma.o 00:02:09.768 LIB libspdk_thread.a 00:02:09.768 SO libspdk_thread.so.11.0 00:02:09.768 SYMLINK libspdk_thread.so 00:02:10.029 CC lib/blob/blobstore.o 00:02:10.029 CC lib/accel/accel.o 00:02:10.029 CC lib/blob/request.o 00:02:10.029 CC lib/accel/accel_rpc.o 00:02:10.029 CC lib/blob/zeroes.o 00:02:10.029 CC lib/accel/accel_sw.o 00:02:10.029 CC lib/fsdev/fsdev.o 00:02:10.029 CC lib/blob/blob_bs_dev.o 00:02:10.029 CC lib/fsdev/fsdev_io.o 00:02:10.029 CC lib/fsdev/fsdev_rpc.o 00:02:10.029 CC lib/virtio/virtio.o 00:02:10.029 CC lib/init/json_config.o 00:02:10.029 CC lib/virtio/virtio_vhost_user.o 00:02:10.029 CC lib/init/subsystem.o 00:02:10.029 CC lib/virtio/virtio_vfio_user.o 00:02:10.029 CC lib/init/subsystem_rpc.o 00:02:10.029 CC lib/init/rpc.o 00:02:10.029 CC lib/virtio/virtio_pci.o 00:02:10.290 LIB libspdk_init.a 00:02:10.551 SO libspdk_init.so.6.0 00:02:10.551 LIB libspdk_virtio.a 00:02:10.551 SYMLINK libspdk_init.so 00:02:10.551 SO libspdk_virtio.so.7.0 00:02:10.551 SYMLINK libspdk_virtio.so 00:02:10.812 LIB libspdk_fsdev.a 00:02:10.812 CC lib/event/app.o 00:02:10.812 CC lib/event/reactor.o 00:02:10.812 CC lib/event/log_rpc.o 00:02:10.812 CC lib/event/app_rpc.o 00:02:10.812 CC lib/event/scheduler_static.o 00:02:10.812 SO libspdk_fsdev.so.2.0 00:02:11.073 SYMLINK libspdk_fsdev.so 00:02:11.337 LIB libspdk_nvme.a 00:02:11.337 LIB libspdk_accel.a 00:02:11.337 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:11.337 SO libspdk_nvme.so.15.0 00:02:11.337 SO libspdk_accel.so.16.0 00:02:11.337 LIB libspdk_event.a 
00:02:11.597 SO libspdk_event.so.14.0 00:02:11.598 SYMLINK libspdk_accel.so 00:02:11.598 SYMLINK libspdk_event.so 00:02:11.598 SYMLINK libspdk_nvme.so 00:02:11.859 CC lib/bdev/bdev.o 00:02:11.859 CC lib/bdev/bdev_rpc.o 00:02:11.859 CC lib/bdev/bdev_zone.o 00:02:11.859 CC lib/bdev/part.o 00:02:11.859 CC lib/bdev/scsi_nvme.o 00:02:12.120 LIB libspdk_fuse_dispatcher.a 00:02:12.120 SO libspdk_fuse_dispatcher.so.1.0 00:02:12.120 SYMLINK libspdk_fuse_dispatcher.so 00:02:13.508 LIB libspdk_blob.a 00:02:13.508 SO libspdk_blob.so.12.0 00:02:13.508 SYMLINK libspdk_blob.so 00:02:14.081 CC lib/lvol/lvol.o 00:02:14.081 CC lib/blobfs/blobfs.o 00:02:14.081 CC lib/blobfs/tree.o 00:02:15.026 LIB libspdk_blobfs.a 00:02:15.026 SO libspdk_blobfs.so.11.0 00:02:15.026 LIB libspdk_bdev.a 00:02:15.026 SYMLINK libspdk_blobfs.so 00:02:15.026 SO libspdk_bdev.so.17.0 00:02:15.026 LIB libspdk_lvol.a 00:02:15.026 SO libspdk_lvol.so.11.0 00:02:15.026 SYMLINK libspdk_lvol.so 00:02:15.026 SYMLINK libspdk_bdev.so 00:02:15.597 CC lib/scsi/dev.o 00:02:15.597 CC lib/nvmf/ctrlr.o 00:02:15.598 CC lib/scsi/lun.o 00:02:15.598 CC lib/nvmf/ctrlr_discovery.o 00:02:15.598 CC lib/scsi/port.o 00:02:15.598 CC lib/nbd/nbd.o 00:02:15.598 CC lib/nvmf/ctrlr_bdev.o 00:02:15.598 CC lib/scsi/scsi.o 00:02:15.598 CC lib/nbd/nbd_rpc.o 00:02:15.598 CC lib/nvmf/subsystem.o 00:02:15.598 CC lib/scsi/scsi_bdev.o 00:02:15.598 CC lib/nvmf/nvmf.o 00:02:15.598 CC lib/scsi/scsi_pr.o 00:02:15.598 CC lib/nvmf/nvmf_rpc.o 00:02:15.598 CC lib/scsi/scsi_rpc.o 00:02:15.598 CC lib/nvmf/transport.o 00:02:15.598 CC lib/scsi/task.o 00:02:15.598 CC lib/nvmf/tcp.o 00:02:15.598 CC lib/ftl/ftl_core.o 00:02:15.598 CC lib/nvmf/stubs.o 00:02:15.598 CC lib/nvmf/mdns_server.o 00:02:15.598 CC lib/ftl/ftl_init.o 00:02:15.598 CC lib/nvmf/rdma.o 00:02:15.598 CC lib/ublk/ublk.o 00:02:15.598 CC lib/nvmf/auth.o 00:02:15.598 CC lib/ftl/ftl_layout.o 00:02:15.598 CC lib/ublk/ublk_rpc.o 00:02:15.598 CC lib/ftl/ftl_debug.o 00:02:15.598 CC lib/ftl/ftl_io.o 00:02:15.598 CC lib/ftl/ftl_sb.o 00:02:15.598 CC lib/ftl/ftl_l2p.o 00:02:15.598 CC lib/ftl/ftl_l2p_flat.o 00:02:15.598 CC lib/ftl/ftl_nv_cache.o 00:02:15.598 CC lib/ftl/ftl_band.o 00:02:15.598 CC lib/ftl/ftl_band_ops.o 00:02:15.598 CC lib/ftl/ftl_writer.o 00:02:15.598 CC lib/ftl/ftl_rq.o 00:02:15.598 CC lib/ftl/ftl_reloc.o 00:02:15.598 CC lib/ftl/ftl_l2p_cache.o 00:02:15.598 CC lib/ftl/ftl_p2l.o 00:02:15.598 CC lib/ftl/ftl_p2l_log.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.598 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.598 CC lib/ftl/utils/ftl_conf.o 00:02:15.598 CC lib/ftl/utils/ftl_md.o 00:02:15.598 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.598 CC lib/ftl/utils/ftl_mempool.o 00:02:15.598 CC lib/ftl/utils/ftl_property.o 00:02:15.598 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.598 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.598 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.598 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.598 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.598 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:02:15.598 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:15.598 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.598 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.598 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.598 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.598 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:15.598 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:15.598 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.598 CC lib/ftl/base/ftl_base_dev.o 00:02:15.598 CC lib/ftl/ftl_trace.o 00:02:16.167 LIB libspdk_nbd.a 00:02:16.167 SO libspdk_nbd.so.7.0 00:02:16.167 SYMLINK libspdk_nbd.so 00:02:16.167 LIB libspdk_scsi.a 00:02:16.429 SO libspdk_scsi.so.9.0 00:02:16.429 LIB libspdk_ublk.a 00:02:16.429 SYMLINK libspdk_scsi.so 00:02:16.429 SO libspdk_ublk.so.3.0 00:02:16.429 SYMLINK libspdk_ublk.so 00:02:16.689 CC lib/vhost/vhost.o 00:02:16.689 CC lib/vhost/vhost_rpc.o 00:02:16.689 CC lib/vhost/vhost_scsi.o 00:02:16.689 CC lib/vhost/vhost_blk.o 00:02:16.689 CC lib/iscsi/conn.o 00:02:16.689 CC lib/vhost/rte_vhost_user.o 00:02:16.689 CC lib/iscsi/init_grp.o 00:02:16.689 CC lib/iscsi/iscsi.o 00:02:16.689 CC lib/iscsi/param.o 00:02:16.689 CC lib/iscsi/portal_grp.o 00:02:16.689 CC lib/iscsi/tgt_node.o 00:02:16.689 CC lib/iscsi/iscsi_subsystem.o 00:02:16.689 CC lib/iscsi/iscsi_rpc.o 00:02:16.689 CC lib/iscsi/task.o 00:02:16.689 LIB libspdk_ftl.a 00:02:16.948 SO libspdk_ftl.so.9.0 00:02:17.209 SYMLINK libspdk_ftl.so 00:02:17.781 LIB libspdk_vhost.a 00:02:18.041 SO libspdk_vhost.so.8.0 00:02:18.041 SYMLINK libspdk_vhost.so 00:02:18.303 LIB libspdk_nvmf.a 00:02:18.303 SO libspdk_nvmf.so.20.0 00:02:18.303 LIB libspdk_iscsi.a 00:02:18.303 SO libspdk_iscsi.so.8.0 00:02:18.564 SYMLINK libspdk_nvmf.so 00:02:18.564 SYMLINK libspdk_iscsi.so 00:02:19.135 CC module/env_dpdk/env_dpdk_rpc.o 00:02:19.397 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:19.397 LIB libspdk_env_dpdk_rpc.a 00:02:19.397 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:19.397 CC module/accel/dsa/accel_dsa.o 00:02:19.397 CC module/accel/error/accel_error.o 00:02:19.397 CC module/accel/dsa/accel_dsa_rpc.o 00:02:19.397 CC module/accel/error/accel_error_rpc.o 00:02:19.397 CC module/scheduler/gscheduler/gscheduler.o 00:02:19.397 CC module/fsdev/aio/fsdev_aio.o 00:02:19.397 CC module/accel/ioat/accel_ioat.o 00:02:19.397 CC module/fsdev/aio/linux_aio_mgr.o 00:02:19.397 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:19.397 CC module/accel/iaa/accel_iaa.o 00:02:19.397 CC module/accel/ioat/accel_ioat_rpc.o 00:02:19.397 CC module/accel/iaa/accel_iaa_rpc.o 00:02:19.397 CC module/sock/posix/posix.o 00:02:19.397 CC module/keyring/file/keyring.o 00:02:19.397 CC module/keyring/linux/keyring.o 00:02:19.397 CC module/keyring/file/keyring_rpc.o 00:02:19.397 CC module/blob/bdev/blob_bdev.o 00:02:19.397 CC module/keyring/linux/keyring_rpc.o 00:02:19.397 SO libspdk_env_dpdk_rpc.so.6.0 00:02:19.397 SYMLINK libspdk_env_dpdk_rpc.so 00:02:19.659 LIB libspdk_keyring_linux.a 00:02:19.659 LIB libspdk_keyring_file.a 00:02:19.659 LIB libspdk_scheduler_gscheduler.a 00:02:19.659 LIB libspdk_scheduler_dpdk_governor.a 00:02:19.659 LIB libspdk_scheduler_dynamic.a 00:02:19.659 SO libspdk_keyring_file.so.2.0 00:02:19.659 SO libspdk_keyring_linux.so.1.0 00:02:19.659 LIB libspdk_accel_error.a 00:02:19.659 LIB libspdk_accel_ioat.a 00:02:19.659 SO libspdk_scheduler_gscheduler.so.4.0 00:02:19.659 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:19.659 SO libspdk_scheduler_dynamic.so.4.0 00:02:19.659 LIB libspdk_accel_iaa.a 00:02:19.659 SO libspdk_accel_error.so.2.0 00:02:19.659 SO libspdk_accel_ioat.so.6.0 
00:02:19.659 SYMLINK libspdk_keyring_file.so 00:02:19.659 SYMLINK libspdk_keyring_linux.so 00:02:19.659 SO libspdk_accel_iaa.so.3.0 00:02:19.659 SYMLINK libspdk_scheduler_gscheduler.so 00:02:19.659 SYMLINK libspdk_scheduler_dynamic.so 00:02:19.659 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:19.659 LIB libspdk_accel_dsa.a 00:02:19.659 LIB libspdk_blob_bdev.a 00:02:19.659 SYMLINK libspdk_accel_ioat.so 00:02:19.659 SYMLINK libspdk_accel_error.so 00:02:19.659 SYMLINK libspdk_accel_iaa.so 00:02:19.659 SO libspdk_accel_dsa.so.5.0 00:02:19.659 SO libspdk_blob_bdev.so.12.0 00:02:19.921 SYMLINK libspdk_blob_bdev.so 00:02:19.921 SYMLINK libspdk_accel_dsa.so 00:02:20.182 LIB libspdk_fsdev_aio.a 00:02:20.182 SO libspdk_fsdev_aio.so.1.0 00:02:20.182 LIB libspdk_sock_posix.a 00:02:20.182 SO libspdk_sock_posix.so.6.0 00:02:20.443 SYMLINK libspdk_fsdev_aio.so 00:02:20.443 CC module/bdev/error/vbdev_error.o 00:02:20.443 CC module/bdev/error/vbdev_error_rpc.o 00:02:20.443 CC module/bdev/delay/vbdev_delay.o 00:02:20.443 CC module/bdev/gpt/gpt.o 00:02:20.443 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:20.443 CC module/bdev/gpt/vbdev_gpt.o 00:02:20.443 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.443 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.443 CC module/bdev/null/bdev_null.o 00:02:20.443 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.443 CC module/bdev/null/bdev_null_rpc.o 00:02:20.443 CC module/bdev/nvme/bdev_nvme.o 00:02:20.443 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:20.443 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.443 CC module/bdev/nvme/nvme_rpc.o 00:02:20.443 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.443 CC module/bdev/nvme/vbdev_opal.o 00:02:20.443 CC module/bdev/split/vbdev_split.o 00:02:20.443 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.443 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.443 CC module/bdev/passthru/vbdev_passthru.o 00:02:20.443 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.444 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:20.444 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.444 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:20.444 CC module/bdev/ftl/bdev_ftl.o 00:02:20.444 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:20.444 CC module/bdev/malloc/bdev_malloc.o 00:02:20.444 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:20.444 CC module/bdev/raid/bdev_raid.o 00:02:20.444 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.444 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.444 CC module/bdev/raid/bdev_raid_sb.o 00:02:20.444 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.444 CC module/bdev/raid/raid0.o 00:02:20.444 CC module/bdev/aio/bdev_aio.o 00:02:20.444 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:20.444 CC module/bdev/aio/bdev_aio_rpc.o 00:02:20.444 CC module/bdev/raid/raid1.o 00:02:20.444 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.444 CC module/bdev/raid/concat.o 00:02:20.444 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.444 SYMLINK libspdk_sock_posix.so 00:02:20.704 LIB libspdk_blobfs_bdev.a 00:02:20.704 SO libspdk_blobfs_bdev.so.6.0 00:02:20.704 LIB libspdk_bdev_split.a 00:02:20.704 LIB libspdk_bdev_error.a 00:02:20.704 LIB libspdk_bdev_gpt.a 00:02:20.704 LIB libspdk_bdev_null.a 00:02:20.704 SO libspdk_bdev_split.so.6.0 00:02:20.704 SO libspdk_bdev_error.so.6.0 00:02:20.704 SO libspdk_bdev_gpt.so.6.0 00:02:20.704 SYMLINK libspdk_blobfs_bdev.so 00:02:20.704 SO libspdk_bdev_null.so.6.0 00:02:20.704 LIB libspdk_bdev_passthru.a 00:02:20.965 LIB libspdk_bdev_ftl.a 00:02:20.965 SYMLINK libspdk_bdev_split.so 00:02:20.965 SO 
libspdk_bdev_passthru.so.6.0 00:02:20.965 SYMLINK libspdk_bdev_error.so 00:02:20.965 LIB libspdk_bdev_zone_block.a 00:02:20.965 SYMLINK libspdk_bdev_gpt.so 00:02:20.965 LIB libspdk_bdev_aio.a 00:02:20.965 SO libspdk_bdev_ftl.so.6.0 00:02:20.965 SYMLINK libspdk_bdev_null.so 00:02:20.965 LIB libspdk_bdev_delay.a 00:02:20.965 SO libspdk_bdev_zone_block.so.6.0 00:02:20.965 LIB libspdk_bdev_iscsi.a 00:02:20.965 SO libspdk_bdev_aio.so.6.0 00:02:20.965 LIB libspdk_bdev_malloc.a 00:02:20.965 SO libspdk_bdev_delay.so.6.0 00:02:20.965 SYMLINK libspdk_bdev_passthru.so 00:02:20.965 SO libspdk_bdev_iscsi.so.6.0 00:02:20.965 SYMLINK libspdk_bdev_ftl.so 00:02:20.965 SO libspdk_bdev_malloc.so.6.0 00:02:20.965 SYMLINK libspdk_bdev_zone_block.so 00:02:20.965 SYMLINK libspdk_bdev_aio.so 00:02:20.965 SYMLINK libspdk_bdev_delay.so 00:02:20.965 SYMLINK libspdk_bdev_iscsi.so 00:02:20.965 SYMLINK libspdk_bdev_malloc.so 00:02:20.965 LIB libspdk_bdev_lvol.a 00:02:20.965 LIB libspdk_bdev_virtio.a 00:02:21.226 SO libspdk_bdev_lvol.so.6.0 00:02:21.226 SO libspdk_bdev_virtio.so.6.0 00:02:21.226 SYMLINK libspdk_bdev_lvol.so 00:02:21.226 SYMLINK libspdk_bdev_virtio.so 00:02:21.797 LIB libspdk_bdev_raid.a 00:02:21.797 SO libspdk_bdev_raid.so.6.0 00:02:21.797 SYMLINK libspdk_bdev_raid.so 00:02:23.713 LIB libspdk_bdev_nvme.a 00:02:23.713 SO libspdk_bdev_nvme.so.7.1 00:02:23.713 SYMLINK libspdk_bdev_nvme.so 00:02:24.285 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.285 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.285 CC module/event/subsystems/sock/sock.o 00:02:24.285 CC module/event/subsystems/keyring/keyring.o 00:02:24.285 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.285 CC module/event/subsystems/vmd/vmd.o 00:02:24.285 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.285 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.285 CC module/event/subsystems/fsdev/fsdev.o 00:02:24.547 LIB libspdk_event_keyring.a 00:02:24.547 LIB libspdk_event_vhost_blk.a 00:02:24.547 LIB libspdk_event_sock.a 00:02:24.547 LIB libspdk_event_vmd.a 00:02:24.547 LIB libspdk_event_scheduler.a 00:02:24.547 LIB libspdk_event_fsdev.a 00:02:24.547 LIB libspdk_event_iobuf.a 00:02:24.547 SO libspdk_event_keyring.so.1.0 00:02:24.547 SO libspdk_event_vhost_blk.so.3.0 00:02:24.547 SO libspdk_event_sock.so.5.0 00:02:24.547 SO libspdk_event_vmd.so.6.0 00:02:24.547 SO libspdk_event_scheduler.so.4.0 00:02:24.547 SO libspdk_event_fsdev.so.1.0 00:02:24.547 SO libspdk_event_iobuf.so.3.0 00:02:24.547 SYMLINK libspdk_event_keyring.so 00:02:24.547 SYMLINK libspdk_event_vhost_blk.so 00:02:24.547 SYMLINK libspdk_event_sock.so 00:02:24.808 SYMLINK libspdk_event_vmd.so 00:02:24.808 SYMLINK libspdk_event_scheduler.so 00:02:24.808 SYMLINK libspdk_event_fsdev.so 00:02:24.808 SYMLINK libspdk_event_iobuf.so 00:02:25.069 CC module/event/subsystems/accel/accel.o 00:02:25.331 LIB libspdk_event_accel.a 00:02:25.331 SO libspdk_event_accel.so.6.0 00:02:25.331 SYMLINK libspdk_event_accel.so 00:02:25.593 CC module/event/subsystems/bdev/bdev.o 00:02:25.854 LIB libspdk_event_bdev.a 00:02:25.854 SO libspdk_event_bdev.so.6.0 00:02:25.854 SYMLINK libspdk_event_bdev.so 00:02:26.427 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:26.427 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:26.427 CC module/event/subsystems/scsi/scsi.o 00:02:26.427 CC module/event/subsystems/ublk/ublk.o 00:02:26.427 CC module/event/subsystems/nbd/nbd.o 00:02:26.427 LIB libspdk_event_ublk.a 00:02:26.427 LIB libspdk_event_nbd.a 00:02:26.427 LIB libspdk_event_scsi.a 00:02:26.427 SO 
libspdk_event_ublk.so.3.0 00:02:26.427 SO libspdk_event_nbd.so.6.0 00:02:26.427 SO libspdk_event_scsi.so.6.0 00:02:26.690 LIB libspdk_event_nvmf.a 00:02:26.690 SYMLINK libspdk_event_ublk.so 00:02:26.690 SYMLINK libspdk_event_nbd.so 00:02:26.690 SO libspdk_event_nvmf.so.6.0 00:02:26.690 SYMLINK libspdk_event_scsi.so 00:02:26.690 SYMLINK libspdk_event_nvmf.so 00:02:26.951 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.951 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:27.213 LIB libspdk_event_vhost_scsi.a 00:02:27.213 LIB libspdk_event_iscsi.a 00:02:27.213 SO libspdk_event_vhost_scsi.so.3.0 00:02:27.214 SO libspdk_event_iscsi.so.6.0 00:02:27.214 SYMLINK libspdk_event_vhost_scsi.so 00:02:27.214 SYMLINK libspdk_event_iscsi.so 00:02:27.477 SO libspdk.so.6.0 00:02:27.477 SYMLINK libspdk.so 00:02:27.739 CXX app/trace/trace.o 00:02:27.739 CC app/trace_record/trace_record.o 00:02:27.739 CC app/spdk_nvme_perf/perf.o 00:02:27.739 CC app/spdk_lspci/spdk_lspci.o 00:02:27.739 CC test/rpc_client/rpc_client_test.o 00:02:28.002 CC app/spdk_top/spdk_top.o 00:02:28.002 TEST_HEADER include/spdk/accel.h 00:02:28.002 CC app/spdk_nvme_identify/identify.o 00:02:28.002 CC app/spdk_nvme_discover/discovery_aer.o 00:02:28.002 TEST_HEADER include/spdk/assert.h 00:02:28.002 TEST_HEADER include/spdk/accel_module.h 00:02:28.002 TEST_HEADER include/spdk/barrier.h 00:02:28.002 TEST_HEADER include/spdk/base64.h 00:02:28.002 TEST_HEADER include/spdk/bdev.h 00:02:28.002 TEST_HEADER include/spdk/bdev_module.h 00:02:28.002 TEST_HEADER include/spdk/bdev_zone.h 00:02:28.002 TEST_HEADER include/spdk/bit_array.h 00:02:28.002 TEST_HEADER include/spdk/bit_pool.h 00:02:28.002 TEST_HEADER include/spdk/blob_bdev.h 00:02:28.002 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:28.002 TEST_HEADER include/spdk/blobfs.h 00:02:28.002 TEST_HEADER include/spdk/blob.h 00:02:28.002 TEST_HEADER include/spdk/conf.h 00:02:28.002 TEST_HEADER include/spdk/config.h 00:02:28.002 TEST_HEADER include/spdk/cpuset.h 00:02:28.002 TEST_HEADER include/spdk/crc16.h 00:02:28.002 TEST_HEADER include/spdk/crc32.h 00:02:28.002 TEST_HEADER include/spdk/crc64.h 00:02:28.002 TEST_HEADER include/spdk/dif.h 00:02:28.002 TEST_HEADER include/spdk/dma.h 00:02:28.002 TEST_HEADER include/spdk/endian.h 00:02:28.002 TEST_HEADER include/spdk/env_dpdk.h 00:02:28.002 TEST_HEADER include/spdk/env.h 00:02:28.002 TEST_HEADER include/spdk/event.h 00:02:28.002 TEST_HEADER include/spdk/fd_group.h 00:02:28.002 CC app/nvmf_tgt/nvmf_main.o 00:02:28.002 TEST_HEADER include/spdk/fd.h 00:02:28.002 TEST_HEADER include/spdk/file.h 00:02:28.002 TEST_HEADER include/spdk/fsdev.h 00:02:28.002 CC app/spdk_dd/spdk_dd.o 00:02:28.002 TEST_HEADER include/spdk/fsdev_module.h 00:02:28.002 TEST_HEADER include/spdk/ftl.h 00:02:28.002 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:28.002 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:28.002 TEST_HEADER include/spdk/gpt_spec.h 00:02:28.002 TEST_HEADER include/spdk/hexlify.h 00:02:28.002 CC app/iscsi_tgt/iscsi_tgt.o 00:02:28.002 TEST_HEADER include/spdk/histogram_data.h 00:02:28.002 TEST_HEADER include/spdk/idxd.h 00:02:28.002 TEST_HEADER include/spdk/idxd_spec.h 00:02:28.002 TEST_HEADER include/spdk/init.h 00:02:28.002 TEST_HEADER include/spdk/ioat.h 00:02:28.002 TEST_HEADER include/spdk/ioat_spec.h 00:02:28.002 TEST_HEADER include/spdk/iscsi_spec.h 00:02:28.002 TEST_HEADER include/spdk/json.h 00:02:28.002 TEST_HEADER include/spdk/jsonrpc.h 00:02:28.002 TEST_HEADER include/spdk/keyring.h 00:02:28.003 TEST_HEADER include/spdk/keyring_module.h 
00:02:28.003 TEST_HEADER include/spdk/likely.h 00:02:28.003 TEST_HEADER include/spdk/log.h 00:02:28.003 CC app/spdk_tgt/spdk_tgt.o 00:02:28.003 TEST_HEADER include/spdk/lvol.h 00:02:28.003 TEST_HEADER include/spdk/md5.h 00:02:28.003 TEST_HEADER include/spdk/mmio.h 00:02:28.003 TEST_HEADER include/spdk/memory.h 00:02:28.003 TEST_HEADER include/spdk/net.h 00:02:28.003 TEST_HEADER include/spdk/nbd.h 00:02:28.003 TEST_HEADER include/spdk/nvme.h 00:02:28.003 TEST_HEADER include/spdk/notify.h 00:02:28.003 TEST_HEADER include/spdk/nvme_intel.h 00:02:28.003 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:28.003 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:28.003 TEST_HEADER include/spdk/nvme_spec.h 00:02:28.003 TEST_HEADER include/spdk/nvme_zns.h 00:02:28.003 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:28.003 TEST_HEADER include/spdk/nvmf.h 00:02:28.003 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:28.003 TEST_HEADER include/spdk/nvmf_spec.h 00:02:28.003 TEST_HEADER include/spdk/nvmf_transport.h 00:02:28.003 TEST_HEADER include/spdk/opal.h 00:02:28.003 TEST_HEADER include/spdk/pci_ids.h 00:02:28.003 TEST_HEADER include/spdk/opal_spec.h 00:02:28.003 TEST_HEADER include/spdk/queue.h 00:02:28.003 TEST_HEADER include/spdk/pipe.h 00:02:28.003 TEST_HEADER include/spdk/reduce.h 00:02:28.003 TEST_HEADER include/spdk/rpc.h 00:02:28.003 TEST_HEADER include/spdk/scsi.h 00:02:28.003 TEST_HEADER include/spdk/scheduler.h 00:02:28.003 TEST_HEADER include/spdk/scsi_spec.h 00:02:28.003 TEST_HEADER include/spdk/sock.h 00:02:28.003 TEST_HEADER include/spdk/stdinc.h 00:02:28.003 TEST_HEADER include/spdk/string.h 00:02:28.003 TEST_HEADER include/spdk/thread.h 00:02:28.003 TEST_HEADER include/spdk/trace.h 00:02:28.003 TEST_HEADER include/spdk/trace_parser.h 00:02:28.003 TEST_HEADER include/spdk/tree.h 00:02:28.003 TEST_HEADER include/spdk/ublk.h 00:02:28.003 TEST_HEADER include/spdk/util.h 00:02:28.003 TEST_HEADER include/spdk/uuid.h 00:02:28.003 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:28.003 TEST_HEADER include/spdk/version.h 00:02:28.003 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:28.003 TEST_HEADER include/spdk/vhost.h 00:02:28.003 TEST_HEADER include/spdk/vmd.h 00:02:28.003 TEST_HEADER include/spdk/xor.h 00:02:28.003 TEST_HEADER include/spdk/zipf.h 00:02:28.003 CXX test/cpp_headers/accel.o 00:02:28.003 CXX test/cpp_headers/accel_module.o 00:02:28.003 CXX test/cpp_headers/assert.o 00:02:28.003 CXX test/cpp_headers/barrier.o 00:02:28.003 CXX test/cpp_headers/base64.o 00:02:28.003 CXX test/cpp_headers/bdev.o 00:02:28.003 CXX test/cpp_headers/bdev_module.o 00:02:28.003 CXX test/cpp_headers/bdev_zone.o 00:02:28.003 CXX test/cpp_headers/bit_array.o 00:02:28.003 CXX test/cpp_headers/blob_bdev.o 00:02:28.003 CXX test/cpp_headers/bit_pool.o 00:02:28.003 CXX test/cpp_headers/blobfs_bdev.o 00:02:28.003 CXX test/cpp_headers/blob.o 00:02:28.003 CXX test/cpp_headers/blobfs.o 00:02:28.003 CXX test/cpp_headers/conf.o 00:02:28.003 CXX test/cpp_headers/cpuset.o 00:02:28.003 CXX test/cpp_headers/config.o 00:02:28.003 CXX test/cpp_headers/crc16.o 00:02:28.003 CXX test/cpp_headers/crc32.o 00:02:28.003 CXX test/cpp_headers/crc64.o 00:02:28.003 CXX test/cpp_headers/dif.o 00:02:28.003 CXX test/cpp_headers/dma.o 00:02:28.003 CXX test/cpp_headers/endian.o 00:02:28.003 CXX test/cpp_headers/env_dpdk.o 00:02:28.003 CXX test/cpp_headers/env.o 00:02:28.003 CXX test/cpp_headers/fd_group.o 00:02:28.003 CXX test/cpp_headers/event.o 00:02:28.003 CXX test/cpp_headers/fd.o 00:02:28.003 CXX test/cpp_headers/file.o 00:02:28.003 CXX 
test/cpp_headers/fsdev.o 00:02:28.003 CXX test/cpp_headers/fsdev_module.o 00:02:28.003 CXX test/cpp_headers/fuse_dispatcher.o 00:02:28.003 CXX test/cpp_headers/ftl.o 00:02:28.003 CXX test/cpp_headers/gpt_spec.o 00:02:28.003 CXX test/cpp_headers/idxd.o 00:02:28.003 CXX test/cpp_headers/hexlify.o 00:02:28.003 CXX test/cpp_headers/histogram_data.o 00:02:28.003 CXX test/cpp_headers/idxd_spec.o 00:02:28.003 CXX test/cpp_headers/init.o 00:02:28.003 CXX test/cpp_headers/ioat.o 00:02:28.003 CXX test/cpp_headers/iscsi_spec.o 00:02:28.003 CXX test/cpp_headers/ioat_spec.o 00:02:28.003 CXX test/cpp_headers/jsonrpc.o 00:02:28.003 CXX test/cpp_headers/json.o 00:02:28.003 CXX test/cpp_headers/keyring.o 00:02:28.003 CXX test/cpp_headers/likely.o 00:02:28.003 CXX test/cpp_headers/log.o 00:02:28.003 CXX test/cpp_headers/keyring_module.o 00:02:28.003 CXX test/cpp_headers/md5.o 00:02:28.003 CXX test/cpp_headers/lvol.o 00:02:28.003 CXX test/cpp_headers/memory.o 00:02:28.003 CXX test/cpp_headers/net.o 00:02:28.003 CXX test/cpp_headers/mmio.o 00:02:28.003 CXX test/cpp_headers/nbd.o 00:02:28.003 CXX test/cpp_headers/nvme.o 00:02:28.003 CXX test/cpp_headers/nvme_ocssd.o 00:02:28.003 CXX test/cpp_headers/notify.o 00:02:28.003 CXX test/cpp_headers/nvme_intel.o 00:02:28.003 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:28.003 CXX test/cpp_headers/nvme_spec.o 00:02:28.003 CXX test/cpp_headers/nvme_zns.o 00:02:28.003 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:28.003 CXX test/cpp_headers/nvmf_cmd.o 00:02:28.003 CXX test/cpp_headers/nvmf_transport.o 00:02:28.003 CXX test/cpp_headers/opal.o 00:02:28.003 CXX test/cpp_headers/nvmf.o 00:02:28.003 CXX test/cpp_headers/nvmf_spec.o 00:02:28.003 CXX test/cpp_headers/opal_spec.o 00:02:28.003 CC examples/ioat/verify/verify.o 00:02:28.003 CXX test/cpp_headers/pci_ids.o 00:02:28.003 CXX test/cpp_headers/pipe.o 00:02:28.003 CXX test/cpp_headers/reduce.o 00:02:28.271 CXX test/cpp_headers/queue.o 00:02:28.271 CXX test/cpp_headers/rpc.o 00:02:28.271 CC test/thread/poller_perf/poller_perf.o 00:02:28.271 CXX test/cpp_headers/scsi.o 00:02:28.271 LINK spdk_lspci 00:02:28.271 CXX test/cpp_headers/scsi_spec.o 00:02:28.271 CXX test/cpp_headers/scheduler.o 00:02:28.271 CXX test/cpp_headers/string.o 00:02:28.271 CC test/env/vtophys/vtophys.o 00:02:28.271 CC test/app/histogram_perf/histogram_perf.o 00:02:28.271 CXX test/cpp_headers/sock.o 00:02:28.271 CXX test/cpp_headers/stdinc.o 00:02:28.271 CXX test/cpp_headers/thread.o 00:02:28.271 CXX test/cpp_headers/trace_parser.o 00:02:28.271 CXX test/cpp_headers/trace.o 00:02:28.271 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:28.271 CC examples/ioat/perf/perf.o 00:02:28.271 CXX test/cpp_headers/ublk.o 00:02:28.271 CXX test/cpp_headers/tree.o 00:02:28.271 CXX test/cpp_headers/util.o 00:02:28.271 CC examples/util/zipf/zipf.o 00:02:28.272 CXX test/cpp_headers/uuid.o 00:02:28.272 CC test/env/memory/memory_ut.o 00:02:28.272 CXX test/cpp_headers/version.o 00:02:28.272 CXX test/cpp_headers/vfio_user_pci.o 00:02:28.272 CXX test/cpp_headers/vfio_user_spec.o 00:02:28.272 CXX test/cpp_headers/vhost.o 00:02:28.272 CXX test/cpp_headers/vmd.o 00:02:28.272 CXX test/cpp_headers/xor.o 00:02:28.272 CXX test/cpp_headers/zipf.o 00:02:28.272 CC test/app/jsoncat/jsoncat.o 00:02:28.272 CC app/fio/nvme/fio_plugin.o 00:02:28.272 CC test/app/stub/stub.o 00:02:28.272 CC test/env/pci/pci_ut.o 00:02:28.272 CC test/dma/test_dma/test_dma.o 00:02:28.272 CC app/fio/bdev/fio_plugin.o 00:02:28.272 LINK rpc_client_test 00:02:28.272 CC test/app/bdev_svc/bdev_svc.o 00:02:28.272 
LINK nvmf_tgt 00:02:28.537 LINK iscsi_tgt 00:02:28.537 LINK spdk_nvme_discover 00:02:28.537 LINK interrupt_tgt 00:02:28.537 LINK spdk_trace_record 00:02:28.799 LINK spdk_tgt 00:02:28.799 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:28.799 LINK histogram_perf 00:02:28.799 LINK spdk_dd 00:02:28.799 CC test/env/mem_callbacks/mem_callbacks.o 00:02:28.799 LINK spdk_trace 00:02:28.799 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.799 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:28.799 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:29.060 LINK env_dpdk_post_init 00:02:29.060 LINK jsoncat 00:02:29.060 LINK vtophys 00:02:29.060 LINK zipf 00:02:29.320 LINK poller_perf 00:02:29.320 LINK bdev_svc 00:02:29.320 LINK ioat_perf 00:02:29.320 LINK stub 00:02:29.320 LINK verify 00:02:29.320 CC app/vhost/vhost.o 00:02:29.582 LINK nvme_fuzz 00:02:29.582 LINK test_dma 00:02:29.582 LINK vhost 00:02:29.582 LINK pci_ut 00:02:29.582 LINK vhost_fuzz 00:02:29.582 LINK mem_callbacks 00:02:29.582 LINK spdk_bdev 00:02:29.843 CC examples/vmd/lsvmd/lsvmd.o 00:02:29.843 CC examples/vmd/led/led.o 00:02:29.843 LINK spdk_top 00:02:29.843 CC examples/idxd/perf/perf.o 00:02:29.843 CC examples/sock/hello_world/hello_sock.o 00:02:29.843 LINK spdk_nvme 00:02:29.843 CC test/event/event_perf/event_perf.o 00:02:29.843 CC test/event/reactor_perf/reactor_perf.o 00:02:29.843 CC examples/thread/thread/thread_ex.o 00:02:29.843 CC test/event/reactor/reactor.o 00:02:29.843 LINK spdk_nvme_perf 00:02:29.843 CC test/event/app_repeat/app_repeat.o 00:02:29.843 CC test/event/scheduler/scheduler.o 00:02:29.843 LINK lsvmd 00:02:29.843 LINK led 00:02:29.843 LINK spdk_nvme_identify 00:02:29.843 LINK event_perf 00:02:29.843 LINK reactor 00:02:29.843 LINK reactor_perf 00:02:30.105 LINK app_repeat 00:02:30.105 LINK memory_ut 00:02:30.105 LINK hello_sock 00:02:30.105 LINK thread 00:02:30.105 LINK scheduler 00:02:30.105 LINK idxd_perf 00:02:30.105 CC test/nvme/aer/aer.o 00:02:30.365 CC test/nvme/reset/reset.o 00:02:30.365 CC test/nvme/boot_partition/boot_partition.o 00:02:30.365 CC test/nvme/connect_stress/connect_stress.o 00:02:30.365 CC test/nvme/e2edp/nvme_dp.o 00:02:30.365 CC test/nvme/overhead/overhead.o 00:02:30.365 CC test/nvme/sgl/sgl.o 00:02:30.365 CC test/nvme/simple_copy/simple_copy.o 00:02:30.365 CC test/nvme/reserve/reserve.o 00:02:30.365 CC test/nvme/cuse/cuse.o 00:02:30.365 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:30.365 CC test/nvme/err_injection/err_injection.o 00:02:30.365 CC test/nvme/fdp/fdp.o 00:02:30.365 CC test/nvme/compliance/nvme_compliance.o 00:02:30.365 CC test/nvme/fused_ordering/fused_ordering.o 00:02:30.365 CC test/nvme/startup/startup.o 00:02:30.365 CC test/accel/dif/dif.o 00:02:30.365 CC test/blobfs/mkfs/mkfs.o 00:02:30.365 CC test/lvol/esnap/esnap.o 00:02:30.365 LINK boot_partition 00:02:30.628 LINK connect_stress 00:02:30.628 LINK startup 00:02:30.628 LINK doorbell_aers 00:02:30.628 LINK err_injection 00:02:30.628 LINK fused_ordering 00:02:30.628 LINK reserve 00:02:30.628 LINK reset 00:02:30.628 LINK aer 00:02:30.628 LINK mkfs 00:02:30.628 LINK simple_copy 00:02:30.628 LINK sgl 00:02:30.628 LINK nvme_dp 00:02:30.628 CC examples/nvme/arbitration/arbitration.o 00:02:30.628 CC examples/nvme/hello_world/hello_world.o 00:02:30.628 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:30.628 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:30.628 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:30.628 CC examples/nvme/hotplug/hotplug.o 00:02:30.628 CC examples/nvme/reconnect/reconnect.o 00:02:30.628 LINK overhead 
00:02:30.628 CC examples/nvme/abort/abort.o
00:02:30.628 LINK fdp
00:02:30.628 LINK nvme_compliance
00:02:30.628 CC examples/accel/perf/accel_perf.o
00:02:30.628 CC examples/blob/cli/blobcli.o
00:02:30.890 CC examples/fsdev/hello_world/hello_fsdev.o
00:02:30.890 CC examples/blob/hello_world/hello_blob.o
00:02:30.890 LINK cmb_copy
00:02:30.890 LINK pmr_persistence
00:02:30.890 LINK hello_world
00:02:30.890 LINK hotplug
00:02:30.890 LINK arbitration
00:02:31.151 LINK hello_blob
00:02:31.151 LINK reconnect
00:02:31.151 LINK abort
00:02:31.151 LINK iscsi_fuzz
00:02:31.151 LINK hello_fsdev
00:02:31.151 LINK dif
00:02:31.151 LINK nvme_manage
00:02:31.412 LINK blobcli
00:02:31.412 LINK accel_perf
00:02:31.673 LINK cuse
00:02:31.673 CC test/bdev/bdevio/bdevio.o
00:02:31.934 CC examples/bdev/hello_world/hello_bdev.o
00:02:31.934 CC examples/bdev/bdevperf/bdevperf.o
00:02:32.194 LINK hello_bdev
00:02:32.194 LINK bdevio
00:02:32.765 LINK bdevperf
00:02:33.708 CC examples/nvmf/nvmf/nvmf.o
00:02:33.969 LINK nvmf
00:02:35.883 LINK esnap
00:02:35.883
00:02:35.883 real 1m0.332s
00:02:35.883 user 8m33.302s
00:02:35.883 sys 5m17.545s
00:02:35.883 04:55:49 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:35.883 04:55:49 make -- common/autotest_common.sh@10 -- $ set +x
00:02:35.883 ************************************
00:02:35.883 END TEST make
00:02:35.883 ************************************
00:02:35.883 04:55:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:35.883 04:55:49 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:35.883 04:55:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:35.883 04:55:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.883 04:55:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:35.883 04:55:49 -- pm/common@44 -- $ pid=1204653
00:02:35.883 04:55:49 -- pm/common@50 -- $ kill -TERM 1204653
00:02:35.883 04:55:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.883 04:55:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:35.883 04:55:49 -- pm/common@44 -- $ pid=1204654
00:02:35.883 04:55:49 -- pm/common@50 -- $ kill -TERM 1204654
00:02:35.883 04:55:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.883 04:55:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:35.883 04:55:49 -- pm/common@44 -- $ pid=1204656
00:02:35.883 04:55:49 -- pm/common@50 -- $ kill -TERM 1204656
00:02:35.883 04:55:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.883 04:55:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:35.883 04:55:49 -- pm/common@44 -- $ pid=1204682
00:02:35.883 04:55:49 -- pm/common@50 -- $ sudo -E kill -TERM 1204682
00:02:35.883 04:55:49 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:02:35.883 04:55:49 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:36.145 04:55:49 -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:02:36.145 04:55:49 -- common/autotest_common.sh@1711 -- # lcov --version
00:02:36.145 04:55:49 -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:02:36.145 04:55:50 -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:02:36.145 04:55:50 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:02:36.145 04:55:50 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:02:36.145 04:55:50 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:02:36.145 04:55:50 -- scripts/common.sh@336 -- # IFS=.-:
00:02:36.145 04:55:50 -- scripts/common.sh@336 -- # read -ra ver1
00:02:36.145 04:55:50 -- scripts/common.sh@337 -- # IFS=.-:
00:02:36.145 04:55:50 -- scripts/common.sh@337 -- # read -ra ver2
00:02:36.145 04:55:50 -- scripts/common.sh@338 -- # local 'op=<'
00:02:36.145 04:55:50 -- scripts/common.sh@340 -- # ver1_l=2
00:02:36.145 04:55:50 -- scripts/common.sh@341 -- # ver2_l=1
00:02:36.145 04:55:50 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:02:36.145 04:55:50 -- scripts/common.sh@344 -- # case "$op" in
00:02:36.145 04:55:50 -- scripts/common.sh@345 -- # : 1
00:02:36.145 04:55:50 -- scripts/common.sh@364 -- # (( v = 0 ))
00:02:36.145 04:55:50 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:36.145 04:55:50 -- scripts/common.sh@365 -- # decimal 1
00:02:36.145 04:55:50 -- scripts/common.sh@353 -- # local d=1
00:02:36.145 04:55:50 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:02:36.145 04:55:50 -- scripts/common.sh@355 -- # echo 1
00:02:36.145 04:55:50 -- scripts/common.sh@365 -- # ver1[v]=1
00:02:36.146 04:55:50 -- scripts/common.sh@366 -- # decimal 2
00:02:36.146 04:55:50 -- scripts/common.sh@353 -- # local d=2
00:02:36.146 04:55:50 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:02:36.146 04:55:50 -- scripts/common.sh@355 -- # echo 2
00:02:36.146 04:55:50 -- scripts/common.sh@366 -- # ver2[v]=2
00:02:36.146 04:55:50 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:02:36.146 04:55:50 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:02:36.146 04:55:50 -- scripts/common.sh@368 -- # return 0
00:02:36.146 04:55:50 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:02:36.146 04:55:50 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:02:36.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:36.146 --rc genhtml_branch_coverage=1
00:02:36.146 --rc genhtml_function_coverage=1
00:02:36.146 --rc genhtml_legend=1
00:02:36.146 --rc geninfo_all_blocks=1
00:02:36.146 --rc geninfo_unexecuted_blocks=1
00:02:36.146
00:02:36.146 '
00:02:36.146 04:55:50 -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:02:36.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:36.146 --rc genhtml_branch_coverage=1
00:02:36.146 --rc genhtml_function_coverage=1
00:02:36.146 --rc genhtml_legend=1
00:02:36.146 --rc geninfo_all_blocks=1
00:02:36.146 --rc geninfo_unexecuted_blocks=1
00:02:36.146
00:02:36.146 '
00:02:36.146 04:55:50 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:02:36.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:36.146 --rc genhtml_branch_coverage=1
00:02:36.146 --rc genhtml_function_coverage=1
00:02:36.146 --rc genhtml_legend=1
00:02:36.146 --rc geninfo_all_blocks=1
00:02:36.146 --rc geninfo_unexecuted_blocks=1
00:02:36.146
00:02:36.146 '
00:02:36.146 04:55:50 -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:02:36.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:36.146 --rc genhtml_branch_coverage=1
00:02:36.146 --rc genhtml_function_coverage=1
00:02:36.146 --rc genhtml_legend=1
00:02:36.146 --rc geninfo_all_blocks=1
00:02:36.146 --rc geninfo_unexecuted_blocks=1
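Note: the xtrace above shows scripts/common.sh resolving "lt 1.15 2" through cmp_versions, which splits both version strings on ".", "-" and ":" and compares them component by component, so lcov 1.15 sorts below 2 and the legacy --rc options are selected. The following is a minimal Bash sketch of that comparison, an illustrative reconstruction rather than the authoritative implementation in spdk/scripts/common.sh; it assumes purely numeric components, whereas the real script first normalizes each component through its decimal helper:

    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        # Walk the longer of the two component lists; missing slots count as 0.
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == ">" ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" ]]   # all components compared equal
    }

    lt 1.15 2 && echo "lcov < 2: keep legacy --rc lcov_* options"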
00:02:36.146 00:02:36.146 ' 00:02:36.146 04:55:50 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:36.146 04:55:50 -- nvmf/common.sh@7 -- # uname -s 00:02:36.146 04:55:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:36.146 04:55:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:36.146 04:55:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:36.146 04:55:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:36.146 04:55:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:36.146 04:55:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:36.146 04:55:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:36.146 04:55:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:36.146 04:55:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:36.146 04:55:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:36.146 04:55:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:36.146 04:55:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:36.146 04:55:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:36.146 04:55:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:36.146 04:55:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:36.146 04:55:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:36.146 04:55:50 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:36.146 04:55:50 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:36.146 04:55:50 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:36.146 04:55:50 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:36.146 04:55:50 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:36.146 04:55:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.146 04:55:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.146 04:55:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.146 04:55:50 -- paths/export.sh@5 -- # export PATH 00:02:36.146 04:55:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.146 04:55:50 -- nvmf/common.sh@51 -- # : 0 00:02:36.146 04:55:50 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:36.146 04:55:50 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:36.146 04:55:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:36.146 04:55:50 -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:36.146 04:55:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:36.146 04:55:50 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:36.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:36.146 04:55:50 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:36.146 04:55:50 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:36.146 04:55:50 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:36.146 04:55:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:36.146 04:55:50 -- spdk/autotest.sh@32 -- # uname -s 00:02:36.146 04:55:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:36.146 04:55:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:36.146 04:55:50 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.146 04:55:50 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:36.146 04:55:50 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.146 04:55:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:36.146 04:55:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:36.146 04:55:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:36.146 04:55:50 -- spdk/autotest.sh@48 -- # udevadm_pid=1270448 00:02:36.146 04:55:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:36.146 04:55:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:36.146 04:55:50 -- pm/common@17 -- # local monitor 00:02:36.146 04:55:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.146 04:55:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.146 04:55:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.146 04:55:50 -- pm/common@21 -- # date +%s 00:02:36.146 04:55:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.146 04:55:50 -- pm/common@21 -- # date +%s 00:02:36.146 04:55:50 -- pm/common@25 -- # sleep 1 00:02:36.146 04:55:50 -- pm/common@21 -- # date +%s 00:02:36.146 04:55:50 -- pm/common@21 -- # date +%s 00:02:36.146 04:55:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716550 00:02:36.146 04:55:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716550 00:02:36.146 04:55:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716550 00:02:36.146 04:55:50 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716550 00:02:36.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716550_collect-vmstat.pm.log 00:02:36.408 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716550_collect-cpu-load.pm.log 00:02:36.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716550_collect-cpu-temp.pm.log 00:02:36.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716550_collect-bmc-pm.bmc.pm.log 00:02:37.349 04:55:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:37.349 04:55:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:37.349 04:55:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:37.349 04:55:51 -- common/autotest_common.sh@10 -- # set +x 00:02:37.349 04:55:51 -- spdk/autotest.sh@59 -- # create_test_list 00:02:37.349 04:55:51 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:37.349 04:55:51 -- common/autotest_common.sh@10 -- # set +x 00:02:37.349 04:55:51 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:37.349 04:55:51 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.349 04:55:51 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.349 04:55:51 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:37.349 04:55:51 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.349 04:55:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:37.349 04:55:51 -- common/autotest_common.sh@1457 -- # uname 00:02:37.349 04:55:51 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:37.349 04:55:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:37.349 04:55:51 -- common/autotest_common.sh@1477 -- # uname 00:02:37.349 04:55:51 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:37.349 04:55:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:37.349 04:55:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:37.349 lcov: LCOV version 1.15 00:02:37.349 04:55:51 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:52.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:52.258 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:07.207 04:56:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:07.207 04:56:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:07.207 04:56:21 -- common/autotest_common.sh@10 -- # set +x 00:03:07.207 04:56:21 -- spdk/autotest.sh@78 -- # rm -f 00:03:07.207 04:56:21 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.416 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:11.416 
0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:11.416 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:11.416 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:11.416 04:56:25 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:11.416 04:56:25 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:11.416 04:56:25 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:11.416 04:56:25 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:11.416 04:56:25 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:11.416 04:56:25 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:11.416 04:56:25 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:11.416 04:56:25 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:03:11.416 04:56:25 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:11.416 04:56:25 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:11.416 04:56:25 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:11.416 04:56:25 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.416 04:56:25 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:11.416 04:56:25 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:11.416 04:56:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:11.416 04:56:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:11.416 04:56:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:11.416 04:56:25 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:11.416 04:56:25 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:11.416 No valid GPT data, bailing 00:03:11.416 04:56:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:11.416 04:56:25 -- scripts/common.sh@394 -- # pt= 00:03:11.416 04:56:25 -- scripts/common.sh@395 -- # return 1 00:03:11.416 04:56:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:11.416 1+0 records in 00:03:11.416 1+0 records out 00:03:11.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00206735 s, 507 MB/s 00:03:11.416 04:56:25 -- spdk/autotest.sh@105 -- # sync 00:03:11.416 04:56:25 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:11.416 04:56:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:11.416 04:56:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:21.423 04:56:33 -- spdk/autotest.sh@111 -- # uname -s 00:03:21.423 04:56:33 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:03:21.423 04:56:33 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:21.423 04:56:33 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:23.336 Hugepages 00:03:23.336 node hugesize free / total 00:03:23.336 node0 1048576kB 0 / 0 00:03:23.336 node0 2048kB 1024 / 1024 00:03:23.336 node1 1048576kB 0 / 0 00:03:23.336 node1 2048kB 1024 / 1024 00:03:23.336 00:03:23.336 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:23.336 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:23.336 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:23.597 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:23.597 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:23.597 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:23.597 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:23.597 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:23.597 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:23.597 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:23.597 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:23.597 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:23.597 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:23.597 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:23.597 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:23.597 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:23.597 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:23.597 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:23.597 04:56:37 -- spdk/autotest.sh@117 -- # uname -s 00:03:23.597 04:56:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:23.597 04:56:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:23.597 04:56:37 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.808 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:27.808 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:27.809 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:29.193 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:29.454 04:56:43 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:30.397 04:56:44 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:30.397 04:56:44 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:30.397 04:56:44 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:30.397 04:56:44 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:30.397 04:56:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:30.397 04:56:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:30.397 04:56:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:30.397 04:56:44 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:30.397 04:56:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:30.397 04:56:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:30.397 04:56:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:30.397 04:56:44 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.608 Waiting for block devices as requested 00:03:34.608 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:34.608 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:34.608 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:34.608 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:34.608 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:34.608 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:34.608 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:34.608 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:34.608 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:34.869 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:34.869 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:35.130 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:35.130 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:35.130 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:35.130 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:35.391 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:35.391 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:35.651 04:56:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:35.651 04:56:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:35.651 04:56:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:35.651 04:56:49 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:35.651 04:56:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:35.652 04:56:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:35.652 04:56:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:35.652 04:56:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:35.652 04:56:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:35.652 04:56:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:35.652 04:56:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:35.652 04:56:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:35.652 04:56:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:35.912 04:56:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:35.912 04:56:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:35.912 04:56:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:35.912 04:56:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:35.912 04:56:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:35.912 04:56:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:35.912 04:56:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:35.912 04:56:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:35.912 04:56:49 -- common/autotest_common.sh@1543 -- # continue 00:03:35.912 04:56:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:35.912 04:56:49 -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:03:35.912 04:56:49 -- common/autotest_common.sh@10 -- # set +x 00:03:35.912 04:56:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:35.912 04:56:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.912 04:56:49 -- common/autotest_common.sh@10 -- # set +x 00:03:35.912 04:56:49 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:39.208 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:39.208 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:39.472 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:40.057 04:56:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:40.057 04:56:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.057 04:56:53 -- common/autotest_common.sh@10 -- # set +x 00:03:40.057 04:56:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:40.057 04:56:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:40.057 04:56:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:40.057 04:56:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:40.057 04:56:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:40.057 04:56:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:40.057 04:56:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:40.057 04:56:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:40.057 04:56:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:40.057 04:56:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:40.057 04:56:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:40.057 04:56:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:40.057 04:56:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:40.057 04:56:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:40.057 04:56:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:40.057 04:56:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:40.057 04:56:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:40.057 04:56:53 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:40.057 04:56:53 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:40.057 04:56:53 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:40.057 04:56:53 -- common/autotest_common.sh@1572 -- # return 0 00:03:40.057 04:56:53 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:40.057 04:56:53 -- 
common/autotest_common.sh@1580 -- # return 0 00:03:40.057 04:56:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:40.057 04:56:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:40.057 04:56:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.057 04:56:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.057 04:56:53 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:40.057 04:56:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.057 04:56:53 -- common/autotest_common.sh@10 -- # set +x 00:03:40.057 04:56:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:40.057 04:56:53 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:40.057 04:56:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.057 04:56:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.057 04:56:53 -- common/autotest_common.sh@10 -- # set +x 00:03:40.057 ************************************ 00:03:40.057 START TEST env 00:03:40.057 ************************************ 00:03:40.057 04:56:53 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:40.318 * Looking for test storage... 00:03:40.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:40.318 04:56:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.318 04:56:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.318 04:56:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.318 04:56:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.318 04:56:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.318 04:56:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.318 04:56:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.318 04:56:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.318 04:56:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.318 04:56:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.318 04:56:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.318 04:56:54 env -- scripts/common.sh@344 -- # case "$op" in 00:03:40.318 04:56:54 env -- scripts/common.sh@345 -- # : 1 00:03:40.318 04:56:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.318 04:56:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.318 04:56:54 env -- scripts/common.sh@365 -- # decimal 1 00:03:40.318 04:56:54 env -- scripts/common.sh@353 -- # local d=1 00:03:40.318 04:56:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.318 04:56:54 env -- scripts/common.sh@355 -- # echo 1 00:03:40.318 04:56:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.318 04:56:54 env -- scripts/common.sh@366 -- # decimal 2 00:03:40.318 04:56:54 env -- scripts/common.sh@353 -- # local d=2 00:03:40.318 04:56:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.318 04:56:54 env -- scripts/common.sh@355 -- # echo 2 00:03:40.318 04:56:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.318 04:56:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.318 04:56:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.318 04:56:54 env -- scripts/common.sh@368 -- # return 0 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.318 --rc genhtml_branch_coverage=1 00:03:40.318 --rc genhtml_function_coverage=1 00:03:40.318 --rc genhtml_legend=1 00:03:40.318 --rc geninfo_all_blocks=1 00:03:40.318 --rc geninfo_unexecuted_blocks=1 00:03:40.318 00:03:40.318 ' 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.318 --rc genhtml_branch_coverage=1 00:03:40.318 --rc genhtml_function_coverage=1 00:03:40.318 --rc genhtml_legend=1 00:03:40.318 --rc geninfo_all_blocks=1 00:03:40.318 --rc geninfo_unexecuted_blocks=1 00:03:40.318 00:03:40.318 ' 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.318 --rc genhtml_branch_coverage=1 00:03:40.318 --rc genhtml_function_coverage=1 00:03:40.318 --rc genhtml_legend=1 00:03:40.318 --rc geninfo_all_blocks=1 00:03:40.318 --rc geninfo_unexecuted_blocks=1 00:03:40.318 00:03:40.318 ' 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.318 --rc genhtml_branch_coverage=1 00:03:40.318 --rc genhtml_function_coverage=1 00:03:40.318 --rc genhtml_legend=1 00:03:40.318 --rc geninfo_all_blocks=1 00:03:40.318 --rc geninfo_unexecuted_blocks=1 00:03:40.318 00:03:40.318 ' 00:03:40.318 04:56:54 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.318 04:56:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.318 04:56:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.318 ************************************ 00:03:40.318 START TEST env_memory 00:03:40.318 ************************************ 00:03:40.318 04:56:54 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:40.318 00:03:40.318 00:03:40.318 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.318 http://cunit.sourceforge.net/ 00:03:40.318 00:03:40.318 00:03:40.318 Suite: memory 00:03:40.318 Test: alloc and free memory map ...[2024-12-09 04:56:54.295731] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:40.579 passed 00:03:40.579 Test: mem map translation ...[2024-12-09 04:56:54.337635] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:40.579 [2024-12-09 04:56:54.337676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:40.579 [2024-12-09 04:56:54.337744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:40.579 [2024-12-09 04:56:54.337763] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:40.579 passed 00:03:40.579 Test: mem map registration ...[2024-12-09 04:56:54.411374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:40.579 [2024-12-09 04:56:54.411413] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:40.579 passed 00:03:40.579 Test: mem map adjacent registrations ...passed 00:03:40.579 00:03:40.579 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.579 suites 1 1 n/a 0 0 00:03:40.579 tests 4 4 4 0 0 00:03:40.579 asserts 152 152 152 0 n/a 00:03:40.580 00:03:40.580 Elapsed time = 0.259 seconds 00:03:40.580 00:03:40.580 real 0m0.297s 00:03:40.580 user 0m0.270s 00:03:40.580 sys 0m0.027s 00:03:40.580 04:56:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.580 04:56:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:40.580 ************************************ 00:03:40.580 END TEST env_memory 00:03:40.580 ************************************ 00:03:40.580 04:56:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:40.580 04:56:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.580 04:56:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.580 04:56:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.842 ************************************ 00:03:40.842 START TEST env_vtophys 00:03:40.842 ************************************ 00:03:40.842 04:56:54 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:40.842 EAL: lib.eal log level changed from notice to debug 00:03:40.842 EAL: Detected lcore 0 as core 0 on socket 0 00:03:40.842 EAL: Detected lcore 1 as core 1 on socket 0 00:03:40.842 EAL: Detected lcore 2 as core 2 on socket 0 00:03:40.842 EAL: Detected lcore 3 as core 3 on socket 0 00:03:40.842 EAL: Detected lcore 4 as core 4 on socket 0 00:03:40.842 EAL: Detected lcore 5 as core 5 on socket 0 00:03:40.842 EAL: Detected lcore 6 as core 6 on socket 0 00:03:40.842 EAL: Detected lcore 7 as core 7 on socket 0 00:03:40.842 EAL: Detected lcore 8 as core 8 on socket 0 00:03:40.842 EAL: Detected lcore 9 as core 9 on socket 0 00:03:40.842 EAL: Detected lcore 10 as 
core 10 on socket 0 00:03:40.842 EAL: Detected lcore 11 as core 11 on socket 0 00:03:40.842 EAL: Detected lcore 12 as core 12 on socket 0 00:03:40.842 EAL: Detected lcore 13 as core 13 on socket 0 00:03:40.842 EAL: Detected lcore 14 as core 14 on socket 0 00:03:40.842 EAL: Detected lcore 15 as core 15 on socket 0 00:03:40.842 EAL: Detected lcore 16 as core 16 on socket 0 00:03:40.842 EAL: Detected lcore 17 as core 17 on socket 0 00:03:40.842 EAL: Detected lcore 18 as core 18 on socket 0 00:03:40.842 EAL: Detected lcore 19 as core 19 on socket 0 00:03:40.842 EAL: Detected lcore 20 as core 20 on socket 0 00:03:40.842 EAL: Detected lcore 21 as core 21 on socket 0 00:03:40.842 EAL: Detected lcore 22 as core 22 on socket 0 00:03:40.842 EAL: Detected lcore 23 as core 23 on socket 0 00:03:40.842 EAL: Detected lcore 24 as core 24 on socket 0 00:03:40.842 EAL: Detected lcore 25 as core 25 on socket 0 00:03:40.842 EAL: Detected lcore 26 as core 26 on socket 0 00:03:40.842 EAL: Detected lcore 27 as core 27 on socket 0 00:03:40.842 EAL: Detected lcore 28 as core 28 on socket 0 00:03:40.842 EAL: Detected lcore 29 as core 29 on socket 0 00:03:40.843 EAL: Detected lcore 30 as core 30 on socket 0 00:03:40.843 EAL: Detected lcore 31 as core 31 on socket 0 00:03:40.843 EAL: Detected lcore 32 as core 32 on socket 0 00:03:40.843 EAL: Detected lcore 33 as core 33 on socket 0 00:03:40.843 EAL: Detected lcore 34 as core 34 on socket 0 00:03:40.843 EAL: Detected lcore 35 as core 35 on socket 0 00:03:40.843 EAL: Detected lcore 36 as core 0 on socket 1 00:03:40.843 EAL: Detected lcore 37 as core 1 on socket 1 00:03:40.843 EAL: Detected lcore 38 as core 2 on socket 1 00:03:40.843 EAL: Detected lcore 39 as core 3 on socket 1 00:03:40.843 EAL: Detected lcore 40 as core 4 on socket 1 00:03:40.843 EAL: Detected lcore 41 as core 5 on socket 1 00:03:40.843 EAL: Detected lcore 42 as core 6 on socket 1 00:03:40.843 EAL: Detected lcore 43 as core 7 on socket 1 00:03:40.843 EAL: Detected lcore 44 as core 8 on socket 1 00:03:40.843 EAL: Detected lcore 45 as core 9 on socket 1 00:03:40.843 EAL: Detected lcore 46 as core 10 on socket 1 00:03:40.843 EAL: Detected lcore 47 as core 11 on socket 1 00:03:40.843 EAL: Detected lcore 48 as core 12 on socket 1 00:03:40.843 EAL: Detected lcore 49 as core 13 on socket 1 00:03:40.843 EAL: Detected lcore 50 as core 14 on socket 1 00:03:40.843 EAL: Detected lcore 51 as core 15 on socket 1 00:03:40.843 EAL: Detected lcore 52 as core 16 on socket 1 00:03:40.843 EAL: Detected lcore 53 as core 17 on socket 1 00:03:40.843 EAL: Detected lcore 54 as core 18 on socket 1 00:03:40.843 EAL: Detected lcore 55 as core 19 on socket 1 00:03:40.843 EAL: Detected lcore 56 as core 20 on socket 1 00:03:40.843 EAL: Detected lcore 57 as core 21 on socket 1 00:03:40.843 EAL: Detected lcore 58 as core 22 on socket 1 00:03:40.843 EAL: Detected lcore 59 as core 23 on socket 1 00:03:40.843 EAL: Detected lcore 60 as core 24 on socket 1 00:03:40.843 EAL: Detected lcore 61 as core 25 on socket 1 00:03:40.843 EAL: Detected lcore 62 as core 26 on socket 1 00:03:40.843 EAL: Detected lcore 63 as core 27 on socket 1 00:03:40.843 EAL: Detected lcore 64 as core 28 on socket 1 00:03:40.843 EAL: Detected lcore 65 as core 29 on socket 1 00:03:40.843 EAL: Detected lcore 66 as core 30 on socket 1 00:03:40.843 EAL: Detected lcore 67 as core 31 on socket 1 00:03:40.843 EAL: Detected lcore 68 as core 32 on socket 1 00:03:40.843 EAL: Detected lcore 69 as core 33 on socket 1 00:03:40.843 EAL: Detected lcore 70 as core 34 on socket 1 
00:03:40.843 EAL: Detected lcore 71 as core 35 on socket 1 00:03:40.843 EAL: Detected lcore 72 as core 0 on socket 0 00:03:40.843 EAL: Detected lcore 73 as core 1 on socket 0 00:03:40.843 EAL: Detected lcore 74 as core 2 on socket 0 00:03:40.843 EAL: Detected lcore 75 as core 3 on socket 0 00:03:40.843 EAL: Detected lcore 76 as core 4 on socket 0 00:03:40.843 EAL: Detected lcore 77 as core 5 on socket 0 00:03:40.843 EAL: Detected lcore 78 as core 6 on socket 0 00:03:40.843 EAL: Detected lcore 79 as core 7 on socket 0 00:03:40.843 EAL: Detected lcore 80 as core 8 on socket 0 00:03:40.843 EAL: Detected lcore 81 as core 9 on socket 0 00:03:40.843 EAL: Detected lcore 82 as core 10 on socket 0 00:03:40.843 EAL: Detected lcore 83 as core 11 on socket 0 00:03:40.843 EAL: Detected lcore 84 as core 12 on socket 0 00:03:40.843 EAL: Detected lcore 85 as core 13 on socket 0 00:03:40.843 EAL: Detected lcore 86 as core 14 on socket 0 00:03:40.843 EAL: Detected lcore 87 as core 15 on socket 0 00:03:40.843 EAL: Detected lcore 88 as core 16 on socket 0 00:03:40.843 EAL: Detected lcore 89 as core 17 on socket 0 00:03:40.843 EAL: Detected lcore 90 as core 18 on socket 0 00:03:40.843 EAL: Detected lcore 91 as core 19 on socket 0 00:03:40.843 EAL: Detected lcore 92 as core 20 on socket 0 00:03:40.843 EAL: Detected lcore 93 as core 21 on socket 0 00:03:40.843 EAL: Detected lcore 94 as core 22 on socket 0 00:03:40.843 EAL: Detected lcore 95 as core 23 on socket 0 00:03:40.843 EAL: Detected lcore 96 as core 24 on socket 0 00:03:40.843 EAL: Detected lcore 97 as core 25 on socket 0 00:03:40.843 EAL: Detected lcore 98 as core 26 on socket 0 00:03:40.843 EAL: Detected lcore 99 as core 27 on socket 0 00:03:40.843 EAL: Detected lcore 100 as core 28 on socket 0 00:03:40.843 EAL: Detected lcore 101 as core 29 on socket 0 00:03:40.843 EAL: Detected lcore 102 as core 30 on socket 0 00:03:40.843 EAL: Detected lcore 103 as core 31 on socket 0 00:03:40.843 EAL: Detected lcore 104 as core 32 on socket 0 00:03:40.843 EAL: Detected lcore 105 as core 33 on socket 0 00:03:40.843 EAL: Detected lcore 106 as core 34 on socket 0 00:03:40.843 EAL: Detected lcore 107 as core 35 on socket 0 00:03:40.843 EAL: Detected lcore 108 as core 0 on socket 1 00:03:40.843 EAL: Detected lcore 109 as core 1 on socket 1 00:03:40.843 EAL: Detected lcore 110 as core 2 on socket 1 00:03:40.843 EAL: Detected lcore 111 as core 3 on socket 1 00:03:40.843 EAL: Detected lcore 112 as core 4 on socket 1 00:03:40.843 EAL: Detected lcore 113 as core 5 on socket 1 00:03:40.843 EAL: Detected lcore 114 as core 6 on socket 1 00:03:40.843 EAL: Detected lcore 115 as core 7 on socket 1 00:03:40.843 EAL: Detected lcore 116 as core 8 on socket 1 00:03:40.843 EAL: Detected lcore 117 as core 9 on socket 1 00:03:40.843 EAL: Detected lcore 118 as core 10 on socket 1 00:03:40.843 EAL: Detected lcore 119 as core 11 on socket 1 00:03:40.843 EAL: Detected lcore 120 as core 12 on socket 1 00:03:40.843 EAL: Detected lcore 121 as core 13 on socket 1 00:03:40.843 EAL: Detected lcore 122 as core 14 on socket 1 00:03:40.843 EAL: Detected lcore 123 as core 15 on socket 1 00:03:40.843 EAL: Detected lcore 124 as core 16 on socket 1 00:03:40.843 EAL: Detected lcore 125 as core 17 on socket 1 00:03:40.843 EAL: Detected lcore 126 as core 18 on socket 1 00:03:40.843 EAL: Detected lcore 127 as core 19 on socket 1 00:03:40.843 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:40.843 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:40.843 EAL: Skipped lcore 130 as core 22 on socket 1 
00:03:40.843 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:40.843 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:40.843 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:40.843 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:40.843 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:40.843 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:40.843 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:40.843 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:40.843 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:40.843 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:40.843 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:40.843 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:40.843 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:40.843 EAL: Maximum logical cores by configuration: 128 00:03:40.843 EAL: Detected CPU lcores: 128 00:03:40.843 EAL: Detected NUMA nodes: 2 00:03:40.843 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:40.843 EAL: Detected shared linkage of DPDK 00:03:40.843 EAL: No shared files mode enabled, IPC will be disabled 00:03:40.843 EAL: Bus pci wants IOVA as 'DC' 00:03:40.843 EAL: Buses did not request a specific IOVA mode. 00:03:40.843 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:40.843 EAL: Selected IOVA mode 'VA' 00:03:40.843 EAL: Probing VFIO support... 00:03:40.843 EAL: IOMMU type 1 (Type 1) is supported 00:03:40.843 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:40.843 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:40.843 EAL: VFIO support initialized 00:03:40.843 EAL: Ask a virtual area of 0x2e000 bytes 00:03:40.843 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:40.843 EAL: Setting up physically contiguous memory... 00:03:40.843 EAL: Setting maximum number of open files to 524288 00:03:40.843 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:40.843 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:40.843 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:40.843 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.843 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:40.843 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.843 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.843 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:40.843 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:40.843 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.843 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:40.843 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.843 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.843 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:40.843 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:40.843 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.843 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:40.843 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.843 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.843 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:40.843 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:40.843 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.843 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:40.843 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.843 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.843 EAL: 
Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:40.843 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:40.843 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:40.843 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.843 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:40.843 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:40.843 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.843 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:40.843 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:40.843 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.843 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:40.843 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:40.843 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.843 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:40.843 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:40.843 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.843 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:40.843 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:40.843 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.844 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:40.844 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:40.844 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.844 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:40.844 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:40.844 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.844 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:40.844 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:40.844 EAL: Hugepages will be freed exactly as allocated. 00:03:40.844 EAL: No shared files mode enabled, IPC is disabled 00:03:40.844 EAL: No shared files mode enabled, IPC is disabled 00:03:40.844 EAL: TSC frequency is ~2400000 KHz 00:03:40.844 EAL: Main lcore 0 is ready (tid=7fa164e3fa40;cpuset=[0]) 00:03:40.844 EAL: Trying to obtain current memory policy. 00:03:40.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.844 EAL: Restoring previous memory policy: 0 00:03:40.844 EAL: request: mp_malloc_sync 00:03:40.844 EAL: No shared files mode enabled, IPC is disabled 00:03:40.844 EAL: Heap on socket 0 was expanded by 2MB 00:03:40.844 EAL: No shared files mode enabled, IPC is disabled 00:03:40.844 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:40.844 EAL: Mem event callback 'spdk:(nil)' registered 00:03:40.844 00:03:40.844 00:03:40.844 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.844 http://cunit.sourceforge.net/ 00:03:40.844 00:03:40.844 00:03:40.844 Suite: components_suite 00:03:41.420 Test: vtophys_malloc_test ...passed 00:03:41.420 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
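The virtual-area reservations above follow directly from the memseg-list geometry EAL printed: each list holds 8192 segments of the 2 MiB hugepage size, so every 0x400000000-byte ask is 8192 x 2 MiB = 16 GiB, and with 4 lists per socket across 2 sockets the total address space set aside is 128 GiB. A quick shell sanity check, using the constants copied from the log rather than queried from DPDK:

  printf '0x%x bytes per memseg list\n' $((8192 * 2 * 1024 * 1024))   # 0x400000000 = 16 GiB
  echo "$((16 * 4 * 2)) GiB reserved across both sockets"             # 128 GiB

Only address space is reserved at this point; physical 2 MiB pages are taken from the hugepage pools on demand, which is what the heap expand/shrink messages that follow show.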
00:03:41.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.420 EAL: Restoring previous memory policy: 4 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was expanded by 4MB 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was shrunk by 4MB 00:03:41.420 EAL: Trying to obtain current memory policy. 00:03:41.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.420 EAL: Restoring previous memory policy: 4 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was expanded by 6MB 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was shrunk by 6MB 00:03:41.420 EAL: Trying to obtain current memory policy. 00:03:41.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.420 EAL: Restoring previous memory policy: 4 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was expanded by 10MB 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was shrunk by 10MB 00:03:41.420 EAL: Trying to obtain current memory policy. 00:03:41.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.420 EAL: Restoring previous memory policy: 4 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was expanded by 18MB 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was shrunk by 18MB 00:03:41.420 EAL: Trying to obtain current memory policy. 00:03:41.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.420 EAL: Restoring previous memory policy: 4 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was expanded by 34MB 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was shrunk by 34MB 00:03:41.420 EAL: Trying to obtain current memory policy. 
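Each "Heap on socket 0 was expanded by N MB" message above means DPDK pulled more 2 MiB hugepages from the per-node pools (1024 pages per node in the setup.sh status output earlier). To watch the pools drain and refill while a test like this runs, the standard kernel sysfs counters are enough; a small loop (plain sysfs, nothing SPDK-specific, node paths assumed to match this two-socket box):

  for n in /sys/devices/system/node/node*; do
    hp=$n/hugepages/hugepages-2048kB
    echo "$(basename $n): $(cat $hp/free_hugepages) free of $(cat $hp/nr_hugepages)"
  done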
00:03:41.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.420 EAL: Restoring previous memory policy: 4 00:03:41.420 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.420 EAL: request: mp_malloc_sync 00:03:41.420 EAL: No shared files mode enabled, IPC is disabled 00:03:41.420 EAL: Heap on socket 0 was expanded by 66MB 00:03:41.685 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.685 EAL: request: mp_malloc_sync 00:03:41.685 EAL: No shared files mode enabled, IPC is disabled 00:03:41.685 EAL: Heap on socket 0 was shrunk by 66MB 00:03:41.685 EAL: Trying to obtain current memory policy. 00:03:41.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.685 EAL: Restoring previous memory policy: 4 00:03:41.685 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.685 EAL: request: mp_malloc_sync 00:03:41.685 EAL: No shared files mode enabled, IPC is disabled 00:03:41.685 EAL: Heap on socket 0 was expanded by 130MB 00:03:41.946 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.946 EAL: request: mp_malloc_sync 00:03:41.946 EAL: No shared files mode enabled, IPC is disabled 00:03:41.946 EAL: Heap on socket 0 was shrunk by 130MB 00:03:41.946 EAL: Trying to obtain current memory policy. 00:03:41.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.207 EAL: Restoring previous memory policy: 4 00:03:42.207 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.207 EAL: request: mp_malloc_sync 00:03:42.207 EAL: No shared files mode enabled, IPC is disabled 00:03:42.207 EAL: Heap on socket 0 was expanded by 258MB 00:03:42.468 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.468 EAL: request: mp_malloc_sync 00:03:42.468 EAL: No shared files mode enabled, IPC is disabled 00:03:42.468 EAL: Heap on socket 0 was shrunk by 258MB 00:03:42.729 EAL: Trying to obtain current memory policy. 00:03:42.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.729 EAL: Restoring previous memory policy: 4 00:03:42.729 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.729 EAL: request: mp_malloc_sync 00:03:42.729 EAL: No shared files mode enabled, IPC is disabled 00:03:42.729 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.670 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.670 EAL: request: mp_malloc_sync 00:03:43.670 EAL: No shared files mode enabled, IPC is disabled 00:03:43.670 EAL: Heap on socket 0 was shrunk by 514MB 00:03:44.241 EAL: Trying to obtain current memory policy. 
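The expansion sizes traced so far (4, 6, 10, 18, 34, 66, 130, 258 and 514 MB, with one 1026 MB round still to come) fit a simple pattern: the test allocation doubles each round, and the reported growth is 2^k + 2 MB, the extra 2 MB matching the heap's initial expansion. A one-liner reproducing the sequence; the pattern is inferred from the log, not read out of the test source:

  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB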
00:03:44.241 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.241 EAL: Restoring previous memory policy: 4 00:03:44.241 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.241 EAL: request: mp_malloc_sync 00:03:44.241 EAL: No shared files mode enabled, IPC is disabled 00:03:44.241 EAL: Heap on socket 0 was expanded by 1026MB 00:03:45.624 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.624 EAL: request: mp_malloc_sync 00:03:45.624 EAL: No shared files mode enabled, IPC is disabled 00:03:45.624 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:46.194 passed 00:03:46.194 00:03:46.194 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.194 suites 1 1 n/a 0 0 00:03:46.194 tests 2 2 2 0 0 00:03:46.194 asserts 497 497 497 0 n/a 00:03:46.194 00:03:46.194 Elapsed time = 5.231 seconds 00:03:46.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.194 EAL: request: mp_malloc_sync 00:03:46.194 EAL: No shared files mode enabled, IPC is disabled 00:03:46.194 EAL: Heap on socket 0 was shrunk by 2MB 00:03:46.194 EAL: No shared files mode enabled, IPC is disabled 00:03:46.194 EAL: No shared files mode enabled, IPC is disabled 00:03:46.194 EAL: No shared files mode enabled, IPC is disabled 00:03:46.194 00:03:46.194 real 0m5.512s 00:03:46.194 user 0m4.539s 00:03:46.194 sys 0m0.923s 00:03:46.194 04:57:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.194 04:57:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:46.194 ************************************ 00:03:46.194 END TEST env_vtophys 00:03:46.194 ************************************ 00:03:46.194 04:57:00 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:46.194 04:57:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.194 04:57:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.194 04:57:00 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.455 ************************************ 00:03:46.455 START TEST env_pci 00:03:46.455 ************************************ 00:03:46.455 04:57:00 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:46.455 00:03:46.455 00:03:46.455 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.455 http://cunit.sourceforge.net/ 00:03:46.455 00:03:46.455 00:03:46.455 Suite: pci 00:03:46.455 Test: pci_hook ...[2024-12-09 04:57:00.239541] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1290473 has claimed it 00:03:46.455 EAL: Cannot find device (10000:00:01.0) 00:03:46.455 EAL: Failed to attach device on primary process 00:03:46.455 passed 00:03:46.455 00:03:46.455 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.455 suites 1 1 n/a 0 0 00:03:46.455 tests 1 1 1 0 0 00:03:46.455 asserts 25 25 25 0 n/a 00:03:46.455 00:03:46.455 Elapsed time = 0.052 seconds 00:03:46.455 00:03:46.455 real 0m0.135s 00:03:46.455 user 0m0.051s 00:03:46.455 sys 0m0.084s 00:03:46.455 04:57:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.455 04:57:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:46.455 ************************************ 00:03:46.455 END TEST env_pci 00:03:46.455 ************************************ 00:03:46.455 04:57:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:46.455 
04:57:00 env -- env/env.sh@15 -- # uname 00:03:46.455 04:57:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:46.455 04:57:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:46.455 04:57:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:46.455 04:57:00 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:46.455 04:57:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.455 04:57:00 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.455 ************************************ 00:03:46.455 START TEST env_dpdk_post_init 00:03:46.455 ************************************ 00:03:46.455 04:57:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:46.715 EAL: Detected CPU lcores: 128 00:03:46.715 EAL: Detected NUMA nodes: 2 00:03:46.715 EAL: Detected shared linkage of DPDK 00:03:46.715 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:46.715 EAL: Selected IOVA mode 'VA' 00:03:46.715 EAL: VFIO support initialized 00:03:46.715 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.715 EAL: Using IOMMU type 1 (Type 1) 00:03:46.977 EAL: Ignore mapping IO port bar(1) 00:03:46.977 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:47.237 EAL: Ignore mapping IO port bar(1) 00:03:47.237 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:47.237 EAL: Ignore mapping IO port bar(1) 00:03:47.497 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:47.497 EAL: Ignore mapping IO port bar(1) 00:03:47.756 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:47.756 EAL: Ignore mapping IO port bar(1) 00:03:48.017 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:48.017 EAL: Ignore mapping IO port bar(1) 00:03:48.017 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:48.277 EAL: Ignore mapping IO port bar(1) 00:03:48.277 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:48.537 EAL: Ignore mapping IO port bar(1) 00:03:48.537 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:48.797 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:48.797 EAL: Ignore mapping IO port bar(1) 00:03:49.058 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:49.058 EAL: Ignore mapping IO port bar(1) 00:03:49.319 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:49.319 EAL: Ignore mapping IO port bar(1) 00:03:49.580 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:49.580 EAL: Ignore mapping IO port bar(1) 00:03:49.580 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:49.840 EAL: Ignore mapping IO port bar(1) 00:03:49.840 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:50.102 EAL: Ignore mapping IO port bar(1) 00:03:50.102 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:50.363 EAL: Ignore mapping IO port bar(1) 00:03:50.363 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 
(socket 1) 00:03:50.624 EAL: Ignore mapping IO port bar(1) 00:03:50.624 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:50.624 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:50.624 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:50.624 Starting DPDK initialization... 00:03:50.624 Starting SPDK post initialization... 00:03:50.624 SPDK NVMe probe 00:03:50.624 Attaching to 0000:65:00.0 00:03:50.624 Attached to 0000:65:00.0 00:03:50.624 Cleaning up... 00:03:52.541 00:03:52.541 real 0m5.875s 00:03:52.541 user 0m0.148s 00:03:52.541 sys 0m0.287s 00:03:52.541 04:57:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.541 04:57:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:52.541 ************************************ 00:03:52.541 END TEST env_dpdk_post_init 00:03:52.541 ************************************ 00:03:52.541 04:57:06 env -- env/env.sh@26 -- # uname 00:03:52.541 04:57:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:52.541 04:57:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:52.541 04:57:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.541 04:57:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.541 04:57:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.541 ************************************ 00:03:52.541 START TEST env_mem_callbacks 00:03:52.541 ************************************ 00:03:52.541 04:57:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:52.541 EAL: Detected CPU lcores: 128 00:03:52.541 EAL: Detected NUMA nodes: 2 00:03:52.541 EAL: Detected shared linkage of DPDK 00:03:52.541 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:52.541 EAL: Selected IOVA mode 'VA' 00:03:52.541 EAL: VFIO support initialized 00:03:52.541 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:52.541 00:03:52.541 00:03:52.541 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.541 http://cunit.sourceforge.net/ 00:03:52.541 00:03:52.541 00:03:52.541 Suite: memory 00:03:52.541 Test: test ... 
00:03:52.541 register 0x200000200000 2097152 00:03:52.541 malloc 3145728 00:03:52.541 register 0x200000400000 4194304 00:03:52.541 buf 0x2000004fffc0 len 3145728 PASSED 00:03:52.541 malloc 64 00:03:52.541 buf 0x2000004ffec0 len 64 PASSED 00:03:52.541 malloc 4194304 00:03:52.541 register 0x200000800000 6291456 00:03:52.541 buf 0x2000009fffc0 len 4194304 PASSED 00:03:52.541 free 0x2000004fffc0 3145728 00:03:52.541 free 0x2000004ffec0 64 00:03:52.541 unregister 0x200000400000 4194304 PASSED 00:03:52.541 free 0x2000009fffc0 4194304 00:03:52.541 unregister 0x200000800000 6291456 PASSED 00:03:52.802 malloc 8388608 00:03:52.802 register 0x200000400000 10485760 00:03:52.802 buf 0x2000005fffc0 len 8388608 PASSED 00:03:52.802 free 0x2000005fffc0 8388608 00:03:52.802 unregister 0x200000400000 10485760 PASSED 00:03:52.802 passed 00:03:52.802 00:03:52.802 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.802 suites 1 1 n/a 0 0 00:03:52.802 tests 1 1 1 0 0 00:03:52.802 asserts 15 15 15 0 n/a 00:03:52.802 00:03:52.802 Elapsed time = 0.063 seconds 00:03:52.802 00:03:52.802 real 0m0.199s 00:03:52.802 user 0m0.096s 00:03:52.802 sys 0m0.102s 00:03:52.802 04:57:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.802 04:57:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:52.802 ************************************ 00:03:52.802 END TEST env_mem_callbacks 00:03:52.802 ************************************ 00:03:52.802 00:03:52.802 real 0m12.634s 00:03:52.802 user 0m5.372s 00:03:52.802 sys 0m1.805s 00:03:52.802 04:57:06 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.802 04:57:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.802 ************************************ 00:03:52.802 END TEST env 00:03:52.802 ************************************ 00:03:52.802 04:57:06 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:52.802 04:57:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.802 04:57:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.802 04:57:06 -- common/autotest_common.sh@10 -- # set +x 00:03:52.802 ************************************ 00:03:52.802 START TEST rpc 00:03:52.802 ************************************ 00:03:52.802 04:57:06 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:53.064 * Looking for test storage... 
00:03:53.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:53.064 04:57:06 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:53.064 04:57:06 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:53.064 04:57:06 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:53.064 04:57:06 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:53.064 04:57:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.064 04:57:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.064 04:57:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.064 04:57:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.064 04:57:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.064 04:57:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.064 04:57:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.064 04:57:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.064 04:57:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.064 04:57:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.064 04:57:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.064 04:57:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:53.064 04:57:06 rpc -- scripts/common.sh@345 -- # : 1 00:03:53.064 04:57:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.064 04:57:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:53.064 04:57:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:53.064 04:57:06 rpc -- scripts/common.sh@353 -- # local d=1 00:03:53.064 04:57:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.064 04:57:06 rpc -- scripts/common.sh@355 -- # echo 1 00:03:53.064 04:57:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.064 04:57:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:53.064 04:57:06 rpc -- scripts/common.sh@353 -- # local d=2 00:03:53.064 04:57:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.064 04:57:06 rpc -- scripts/common.sh@355 -- # echo 2 00:03:53.064 04:57:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.064 04:57:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.064 04:57:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.065 04:57:06 rpc -- scripts/common.sh@368 -- # return 0 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:53.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.065 --rc genhtml_branch_coverage=1 00:03:53.065 --rc genhtml_function_coverage=1 00:03:53.065 --rc genhtml_legend=1 00:03:53.065 --rc geninfo_all_blocks=1 00:03:53.065 --rc geninfo_unexecuted_blocks=1 00:03:53.065 00:03:53.065 ' 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:53.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.065 --rc genhtml_branch_coverage=1 00:03:53.065 --rc genhtml_function_coverage=1 00:03:53.065 --rc genhtml_legend=1 00:03:53.065 --rc geninfo_all_blocks=1 00:03:53.065 --rc geninfo_unexecuted_blocks=1 00:03:53.065 00:03:53.065 ' 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:53.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.065 --rc genhtml_branch_coverage=1 00:03:53.065 --rc genhtml_function_coverage=1 
00:03:53.065 --rc genhtml_legend=1 00:03:53.065 --rc geninfo_all_blocks=1 00:03:53.065 --rc geninfo_unexecuted_blocks=1 00:03:53.065 00:03:53.065 ' 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:53.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.065 --rc genhtml_branch_coverage=1 00:03:53.065 --rc genhtml_function_coverage=1 00:03:53.065 --rc genhtml_legend=1 00:03:53.065 --rc geninfo_all_blocks=1 00:03:53.065 --rc geninfo_unexecuted_blocks=1 00:03:53.065 00:03:53.065 ' 00:03:53.065 04:57:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1292029 00:03:53.065 04:57:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:53.065 04:57:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1292029 00:03:53.065 04:57:06 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 1292029 ']' 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:53.065 04:57:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.065 [2024-12-09 04:57:07.014854] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:03:53.065 [2024-12-09 04:57:07.014978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292029 ] 00:03:53.325 [2024-12-09 04:57:07.169057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.325 [2024-12-09 04:57:07.294272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:53.325 [2024-12-09 04:57:07.294332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1292029' to capture a snapshot of events at runtime. 00:03:53.325 [2024-12-09 04:57:07.294351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:53.325 [2024-12-09 04:57:07.294362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:53.325 [2024-12-09 04:57:07.294376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1292029 for offline analysis/debug. 
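The three app_setup_trace notices above are printed because spdk_tgt was started with '-e bdev' (the rpc.sh@64 launch in this run), which enables the bdev tracepoint group and backs it with a shared-memory file named after the pid; the rpc_trace_cmd_test section further down confirms this, with trace_get_info reporting tpoint_group_mask 0x8 (the bdev group) and a fully-set bdev tpoint_mask. A short example of acting on these notices, using only the command and path the log itself prints (the /tmp destination is illustrative):

  # snapshot tracepoint events from the live target (pid 1292029)
  spdk_trace -s spdk_tgt -p 1292029
  # or keep the shm file for offline analysis/debug, as the notice suggests
  cp /dev/shm/spdk_tgt_trace.pid1292029 /tmp/spdk_tgt_trace.pid1292029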
00:03:53.326 [2024-12-09 04:57:07.295859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.268 04:57:08 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.268 04:57:08 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:54.268 04:57:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:54.268 04:57:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:54.268 04:57:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:54.268 04:57:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:54.268 04:57:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.268 04:57:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.268 04:57:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.268 ************************************ 00:03:54.268 START TEST rpc_integrity 00:03:54.268 ************************************ 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:54.268 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.268 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:54.268 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:54.268 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:54.268 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.268 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:54.268 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.268 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.268 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:54.268 { 00:03:54.268 "name": "Malloc0", 00:03:54.268 "aliases": [ 00:03:54.268 "74643f21-b308-4801-a060-30621d67292f" 00:03:54.268 ], 00:03:54.268 "product_name": "Malloc disk", 00:03:54.268 "block_size": 512, 00:03:54.268 "num_blocks": 16384, 00:03:54.268 "uuid": "74643f21-b308-4801-a060-30621d67292f", 00:03:54.268 "assigned_rate_limits": { 00:03:54.269 "rw_ios_per_sec": 0, 00:03:54.269 "rw_mbytes_per_sec": 0, 00:03:54.269 "r_mbytes_per_sec": 0, 00:03:54.269 "w_mbytes_per_sec": 0 00:03:54.269 }, 
00:03:54.269 "claimed": false, 00:03:54.269 "zoned": false, 00:03:54.269 "supported_io_types": { 00:03:54.269 "read": true, 00:03:54.269 "write": true, 00:03:54.269 "unmap": true, 00:03:54.269 "flush": true, 00:03:54.269 "reset": true, 00:03:54.269 "nvme_admin": false, 00:03:54.269 "nvme_io": false, 00:03:54.269 "nvme_io_md": false, 00:03:54.269 "write_zeroes": true, 00:03:54.269 "zcopy": true, 00:03:54.269 "get_zone_info": false, 00:03:54.269 "zone_management": false, 00:03:54.269 "zone_append": false, 00:03:54.269 "compare": false, 00:03:54.269 "compare_and_write": false, 00:03:54.269 "abort": true, 00:03:54.269 "seek_hole": false, 00:03:54.269 "seek_data": false, 00:03:54.269 "copy": true, 00:03:54.269 "nvme_iov_md": false 00:03:54.269 }, 00:03:54.269 "memory_domains": [ 00:03:54.269 { 00:03:54.269 "dma_device_id": "system", 00:03:54.269 "dma_device_type": 1 00:03:54.269 }, 00:03:54.269 { 00:03:54.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:54.269 "dma_device_type": 2 00:03:54.269 } 00:03:54.269 ], 00:03:54.269 "driver_specific": {} 00:03:54.269 } 00:03:54.269 ]' 00:03:54.269 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:54.269 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:54.269 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:54.269 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.269 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.269 [2024-12-09 04:57:08.245094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:54.269 [2024-12-09 04:57:08.245164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:54.269 [2024-12-09 04:57:08.245193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001e680 00:03:54.269 [2024-12-09 04:57:08.245205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:54.269 [2024-12-09 04:57:08.247723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:54.269 [2024-12-09 04:57:08.247771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:54.269 Passthru0 00:03:54.269 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.269 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:54.269 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.269 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.530 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.530 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:54.530 { 00:03:54.530 "name": "Malloc0", 00:03:54.530 "aliases": [ 00:03:54.530 "74643f21-b308-4801-a060-30621d67292f" 00:03:54.530 ], 00:03:54.530 "product_name": "Malloc disk", 00:03:54.530 "block_size": 512, 00:03:54.530 "num_blocks": 16384, 00:03:54.530 "uuid": "74643f21-b308-4801-a060-30621d67292f", 00:03:54.530 "assigned_rate_limits": { 00:03:54.530 "rw_ios_per_sec": 0, 00:03:54.530 "rw_mbytes_per_sec": 0, 00:03:54.530 "r_mbytes_per_sec": 0, 00:03:54.530 "w_mbytes_per_sec": 0 00:03:54.530 }, 00:03:54.530 "claimed": true, 00:03:54.530 "claim_type": "exclusive_write", 00:03:54.530 "zoned": false, 00:03:54.530 "supported_io_types": { 00:03:54.530 "read": true, 00:03:54.530 "write": true, 00:03:54.530 "unmap": true, 00:03:54.530 
"flush": true, 00:03:54.530 "reset": true, 00:03:54.530 "nvme_admin": false, 00:03:54.530 "nvme_io": false, 00:03:54.530 "nvme_io_md": false, 00:03:54.530 "write_zeroes": true, 00:03:54.530 "zcopy": true, 00:03:54.530 "get_zone_info": false, 00:03:54.530 "zone_management": false, 00:03:54.530 "zone_append": false, 00:03:54.530 "compare": false, 00:03:54.530 "compare_and_write": false, 00:03:54.530 "abort": true, 00:03:54.530 "seek_hole": false, 00:03:54.530 "seek_data": false, 00:03:54.530 "copy": true, 00:03:54.530 "nvme_iov_md": false 00:03:54.530 }, 00:03:54.530 "memory_domains": [ 00:03:54.530 { 00:03:54.530 "dma_device_id": "system", 00:03:54.530 "dma_device_type": 1 00:03:54.530 }, 00:03:54.530 { 00:03:54.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:54.530 "dma_device_type": 2 00:03:54.530 } 00:03:54.530 ], 00:03:54.530 "driver_specific": {} 00:03:54.530 }, 00:03:54.530 { 00:03:54.530 "name": "Passthru0", 00:03:54.530 "aliases": [ 00:03:54.530 "58a46282-dab5-51d0-8753-19b782cf26c4" 00:03:54.530 ], 00:03:54.530 "product_name": "passthru", 00:03:54.530 "block_size": 512, 00:03:54.530 "num_blocks": 16384, 00:03:54.530 "uuid": "58a46282-dab5-51d0-8753-19b782cf26c4", 00:03:54.530 "assigned_rate_limits": { 00:03:54.530 "rw_ios_per_sec": 0, 00:03:54.530 "rw_mbytes_per_sec": 0, 00:03:54.530 "r_mbytes_per_sec": 0, 00:03:54.531 "w_mbytes_per_sec": 0 00:03:54.531 }, 00:03:54.531 "claimed": false, 00:03:54.531 "zoned": false, 00:03:54.531 "supported_io_types": { 00:03:54.531 "read": true, 00:03:54.531 "write": true, 00:03:54.531 "unmap": true, 00:03:54.531 "flush": true, 00:03:54.531 "reset": true, 00:03:54.531 "nvme_admin": false, 00:03:54.531 "nvme_io": false, 00:03:54.531 "nvme_io_md": false, 00:03:54.531 "write_zeroes": true, 00:03:54.531 "zcopy": true, 00:03:54.531 "get_zone_info": false, 00:03:54.531 "zone_management": false, 00:03:54.531 "zone_append": false, 00:03:54.531 "compare": false, 00:03:54.531 "compare_and_write": false, 00:03:54.531 "abort": true, 00:03:54.531 "seek_hole": false, 00:03:54.531 "seek_data": false, 00:03:54.531 "copy": true, 00:03:54.531 "nvme_iov_md": false 00:03:54.531 }, 00:03:54.531 "memory_domains": [ 00:03:54.531 { 00:03:54.531 "dma_device_id": "system", 00:03:54.531 "dma_device_type": 1 00:03:54.531 }, 00:03:54.531 { 00:03:54.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:54.531 "dma_device_type": 2 00:03:54.531 } 00:03:54.531 ], 00:03:54.531 "driver_specific": { 00:03:54.531 "passthru": { 00:03:54.531 "name": "Passthru0", 00:03:54.531 "base_bdev_name": "Malloc0" 00:03:54.531 } 00:03:54.531 } 00:03:54.531 } 00:03:54.531 ]' 00:03:54.531 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:54.531 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:54.531 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.531 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.531 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.531 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:54.531 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:54.531 04:57:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:54.531 00:03:54.531 real 0m0.321s 00:03:54.531 user 0m0.193s 00:03:54.531 sys 0m0.033s 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.531 04:57:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.531 ************************************ 00:03:54.531 END TEST rpc_integrity 00:03:54.531 ************************************ 00:03:54.531 04:57:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:54.531 04:57:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.531 04:57:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.531 04:57:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.531 ************************************ 00:03:54.531 START TEST rpc_plugins 00:03:54.531 ************************************ 00:03:54.531 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:54.531 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:54.531 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.531 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:54.531 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.531 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:54.531 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:54.531 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.531 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:54.792 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.792 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:54.792 { 00:03:54.792 "name": "Malloc1", 00:03:54.792 "aliases": [ 00:03:54.792 "72e3d107-04bf-4565-a918-8876822cc670" 00:03:54.792 ], 00:03:54.792 "product_name": "Malloc disk", 00:03:54.792 "block_size": 4096, 00:03:54.792 "num_blocks": 256, 00:03:54.792 "uuid": "72e3d107-04bf-4565-a918-8876822cc670", 00:03:54.792 "assigned_rate_limits": { 00:03:54.792 "rw_ios_per_sec": 0, 00:03:54.792 "rw_mbytes_per_sec": 0, 00:03:54.792 "r_mbytes_per_sec": 0, 00:03:54.792 "w_mbytes_per_sec": 0 00:03:54.792 }, 00:03:54.792 "claimed": false, 00:03:54.792 "zoned": false, 00:03:54.792 "supported_io_types": { 00:03:54.792 "read": true, 00:03:54.792 "write": true, 00:03:54.792 "unmap": true, 00:03:54.792 "flush": true, 00:03:54.792 "reset": true, 00:03:54.792 "nvme_admin": false, 00:03:54.792 "nvme_io": false, 00:03:54.792 "nvme_io_md": false, 00:03:54.792 "write_zeroes": true, 00:03:54.792 "zcopy": true, 00:03:54.792 "get_zone_info": false, 00:03:54.792 "zone_management": false, 00:03:54.792 "zone_append": false, 00:03:54.792 "compare": false, 00:03:54.792 "compare_and_write": false, 00:03:54.792 "abort": true, 00:03:54.792 "seek_hole": false, 00:03:54.792 "seek_data": false, 00:03:54.792 "copy": true, 00:03:54.792 "nvme_iov_md": 
false 00:03:54.792 }, 00:03:54.792 "memory_domains": [ 00:03:54.792 { 00:03:54.792 "dma_device_id": "system", 00:03:54.792 "dma_device_type": 1 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:54.792 "dma_device_type": 2 00:03:54.792 } 00:03:54.792 ], 00:03:54.792 "driver_specific": {} 00:03:54.792 } 00:03:54.792 ]' 00:03:54.792 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:54.792 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:54.792 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:54.792 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.792 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:54.792 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.792 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:54.792 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.792 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:54.792 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.792 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:54.792 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:54.792 04:57:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:54.792 00:03:54.792 real 0m0.153s 00:03:54.792 user 0m0.088s 00:03:54.792 sys 0m0.027s 00:03:54.792 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.792 04:57:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:54.792 ************************************ 00:03:54.792 END TEST rpc_plugins 00:03:54.792 ************************************ 00:03:54.792 04:57:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:54.792 04:57:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.792 04:57:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.792 04:57:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.792 ************************************ 00:03:54.792 START TEST rpc_trace_cmd_test 00:03:54.792 ************************************ 00:03:54.792 04:57:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:54.792 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:54.792 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:54.792 04:57:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.792 04:57:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:54.792 04:57:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.792 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:54.792 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1292029", 00:03:54.792 "tpoint_group_mask": "0x8", 00:03:54.792 "iscsi_conn": { 00:03:54.792 "mask": "0x2", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "scsi": { 00:03:54.792 "mask": "0x4", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "bdev": { 00:03:54.792 "mask": "0x8", 00:03:54.792 "tpoint_mask": "0xffffffffffffffff" 00:03:54.792 }, 00:03:54.792 "nvmf_rdma": { 00:03:54.792 "mask": "0x10", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "nvmf_tcp": { 00:03:54.792 "mask": "0x20", 00:03:54.792 
"tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "ftl": { 00:03:54.792 "mask": "0x40", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "blobfs": { 00:03:54.792 "mask": "0x80", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "dsa": { 00:03:54.792 "mask": "0x200", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "thread": { 00:03:54.792 "mask": "0x400", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "nvme_pcie": { 00:03:54.792 "mask": "0x800", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "iaa": { 00:03:54.792 "mask": "0x1000", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "nvme_tcp": { 00:03:54.792 "mask": "0x2000", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "bdev_nvme": { 00:03:54.792 "mask": "0x4000", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "sock": { 00:03:54.792 "mask": "0x8000", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "blob": { 00:03:54.792 "mask": "0x10000", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "bdev_raid": { 00:03:54.792 "mask": "0x20000", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 }, 00:03:54.792 "scheduler": { 00:03:54.792 "mask": "0x40000", 00:03:54.792 "tpoint_mask": "0x0" 00:03:54.792 } 00:03:54.792 }' 00:03:54.792 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:55.054 00:03:55.054 real 0m0.252s 00:03:55.054 user 0m0.217s 00:03:55.054 sys 0m0.024s 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.054 04:57:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:55.054 ************************************ 00:03:55.054 END TEST rpc_trace_cmd_test 00:03:55.054 ************************************ 00:03:55.054 04:57:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:55.054 04:57:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:55.054 04:57:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:55.054 04:57:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.054 04:57:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.054 04:57:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.315 ************************************ 00:03:55.315 START TEST rpc_daemon_integrity 00:03:55.315 ************************************ 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.315 04:57:09 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.315 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:55.315 { 00:03:55.315 "name": "Malloc2", 00:03:55.315 "aliases": [ 00:03:55.315 "b1d75603-bd88-42f6-8e20-d18fc5d4307a" 00:03:55.315 ], 00:03:55.315 "product_name": "Malloc disk", 00:03:55.315 "block_size": 512, 00:03:55.315 "num_blocks": 16384, 00:03:55.315 "uuid": "b1d75603-bd88-42f6-8e20-d18fc5d4307a", 00:03:55.315 "assigned_rate_limits": { 00:03:55.315 "rw_ios_per_sec": 0, 00:03:55.315 "rw_mbytes_per_sec": 0, 00:03:55.315 "r_mbytes_per_sec": 0, 00:03:55.315 "w_mbytes_per_sec": 0 00:03:55.315 }, 00:03:55.315 "claimed": false, 00:03:55.315 "zoned": false, 00:03:55.315 "supported_io_types": { 00:03:55.315 "read": true, 00:03:55.315 "write": true, 00:03:55.315 "unmap": true, 00:03:55.315 "flush": true, 00:03:55.315 "reset": true, 00:03:55.315 "nvme_admin": false, 00:03:55.315 "nvme_io": false, 00:03:55.315 "nvme_io_md": false, 00:03:55.315 "write_zeroes": true, 00:03:55.315 "zcopy": true, 00:03:55.315 "get_zone_info": false, 00:03:55.315 "zone_management": false, 00:03:55.315 "zone_append": false, 00:03:55.315 "compare": false, 00:03:55.315 "compare_and_write": false, 00:03:55.315 "abort": true, 00:03:55.315 "seek_hole": false, 00:03:55.315 "seek_data": false, 00:03:55.315 "copy": true, 00:03:55.315 "nvme_iov_md": false 00:03:55.315 }, 00:03:55.315 "memory_domains": [ 00:03:55.315 { 00:03:55.315 "dma_device_id": "system", 00:03:55.315 "dma_device_type": 1 00:03:55.315 }, 00:03:55.315 { 00:03:55.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.315 "dma_device_type": 2 00:03:55.315 } 00:03:55.315 ], 00:03:55.315 "driver_specific": {} 00:03:55.315 } 00:03:55.315 ]' 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.316 [2024-12-09 04:57:09.215935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:55.316 
[2024-12-09 04:57:09.216003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:55.316 [2024-12-09 04:57:09.216031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001f880 00:03:55.316 [2024-12-09 04:57:09.216043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:55.316 [2024-12-09 04:57:09.218589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:55.316 [2024-12-09 04:57:09.218636] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:55.316 Passthru0 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:55.316 { 00:03:55.316 "name": "Malloc2", 00:03:55.316 "aliases": [ 00:03:55.316 "b1d75603-bd88-42f6-8e20-d18fc5d4307a" 00:03:55.316 ], 00:03:55.316 "product_name": "Malloc disk", 00:03:55.316 "block_size": 512, 00:03:55.316 "num_blocks": 16384, 00:03:55.316 "uuid": "b1d75603-bd88-42f6-8e20-d18fc5d4307a", 00:03:55.316 "assigned_rate_limits": { 00:03:55.316 "rw_ios_per_sec": 0, 00:03:55.316 "rw_mbytes_per_sec": 0, 00:03:55.316 "r_mbytes_per_sec": 0, 00:03:55.316 "w_mbytes_per_sec": 0 00:03:55.316 }, 00:03:55.316 "claimed": true, 00:03:55.316 "claim_type": "exclusive_write", 00:03:55.316 "zoned": false, 00:03:55.316 "supported_io_types": { 00:03:55.316 "read": true, 00:03:55.316 "write": true, 00:03:55.316 "unmap": true, 00:03:55.316 "flush": true, 00:03:55.316 "reset": true, 00:03:55.316 "nvme_admin": false, 00:03:55.316 "nvme_io": false, 00:03:55.316 "nvme_io_md": false, 00:03:55.316 "write_zeroes": true, 00:03:55.316 "zcopy": true, 00:03:55.316 "get_zone_info": false, 00:03:55.316 "zone_management": false, 00:03:55.316 "zone_append": false, 00:03:55.316 "compare": false, 00:03:55.316 "compare_and_write": false, 00:03:55.316 "abort": true, 00:03:55.316 "seek_hole": false, 00:03:55.316 "seek_data": false, 00:03:55.316 "copy": true, 00:03:55.316 "nvme_iov_md": false 00:03:55.316 }, 00:03:55.316 "memory_domains": [ 00:03:55.316 { 00:03:55.316 "dma_device_id": "system", 00:03:55.316 "dma_device_type": 1 00:03:55.316 }, 00:03:55.316 { 00:03:55.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.316 "dma_device_type": 2 00:03:55.316 } 00:03:55.316 ], 00:03:55.316 "driver_specific": {} 00:03:55.316 }, 00:03:55.316 { 00:03:55.316 "name": "Passthru0", 00:03:55.316 "aliases": [ 00:03:55.316 "a5193f86-2f89-5ea4-8029-34619cf0b129" 00:03:55.316 ], 00:03:55.316 "product_name": "passthru", 00:03:55.316 "block_size": 512, 00:03:55.316 "num_blocks": 16384, 00:03:55.316 "uuid": "a5193f86-2f89-5ea4-8029-34619cf0b129", 00:03:55.316 "assigned_rate_limits": { 00:03:55.316 "rw_ios_per_sec": 0, 00:03:55.316 "rw_mbytes_per_sec": 0, 00:03:55.316 "r_mbytes_per_sec": 0, 00:03:55.316 "w_mbytes_per_sec": 0 00:03:55.316 }, 00:03:55.316 "claimed": false, 00:03:55.316 "zoned": false, 00:03:55.316 "supported_io_types": { 00:03:55.316 "read": true, 00:03:55.316 "write": true, 00:03:55.316 "unmap": true, 00:03:55.316 "flush": true, 00:03:55.316 "reset": true, 
00:03:55.316 "nvme_admin": false, 00:03:55.316 "nvme_io": false, 00:03:55.316 "nvme_io_md": false, 00:03:55.316 "write_zeroes": true, 00:03:55.316 "zcopy": true, 00:03:55.316 "get_zone_info": false, 00:03:55.316 "zone_management": false, 00:03:55.316 "zone_append": false, 00:03:55.316 "compare": false, 00:03:55.316 "compare_and_write": false, 00:03:55.316 "abort": true, 00:03:55.316 "seek_hole": false, 00:03:55.316 "seek_data": false, 00:03:55.316 "copy": true, 00:03:55.316 "nvme_iov_md": false 00:03:55.316 }, 00:03:55.316 "memory_domains": [ 00:03:55.316 { 00:03:55.316 "dma_device_id": "system", 00:03:55.316 "dma_device_type": 1 00:03:55.316 }, 00:03:55.316 { 00:03:55.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.316 "dma_device_type": 2 00:03:55.316 } 00:03:55.316 ], 00:03:55.316 "driver_specific": { 00:03:55.316 "passthru": { 00:03:55.316 "name": "Passthru0", 00:03:55.316 "base_bdev_name": "Malloc2" 00:03:55.316 } 00:03:55.316 } 00:03:55.316 } 00:03:55.316 ]' 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.316 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:55.577 00:03:55.577 real 0m0.317s 00:03:55.577 user 0m0.180s 00:03:55.577 sys 0m0.046s 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.577 04:57:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.577 ************************************ 00:03:55.577 END TEST rpc_daemon_integrity 00:03:55.577 ************************************ 00:03:55.577 04:57:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:55.577 04:57:09 rpc -- rpc/rpc.sh@84 -- # killprocess 1292029 00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@954 -- # '[' -z 1292029 ']' 00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@958 -- # kill -0 1292029 00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@959 -- # uname 00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1292029 
00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1292029' 00:03:55.577 killing process with pid 1292029 00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@973 -- # kill 1292029 00:03:55.577 04:57:09 rpc -- common/autotest_common.sh@978 -- # wait 1292029 00:03:57.497 00:03:57.497 real 0m4.508s 00:03:57.497 user 0m5.046s 00:03:57.497 sys 0m1.013s 00:03:57.497 04:57:11 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.497 04:57:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.497 ************************************ 00:03:57.497 END TEST rpc 00:03:57.497 ************************************ 00:03:57.497 04:57:11 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:57.497 04:57:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.497 04:57:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.497 04:57:11 -- common/autotest_common.sh@10 -- # set +x 00:03:57.497 ************************************ 00:03:57.497 START TEST skip_rpc 00:03:57.497 ************************************ 00:03:57.497 04:57:11 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:57.497 * Looking for test storage... 00:03:57.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.497 04:57:11 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:57.497 04:57:11 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:57.497 04:57:11 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:57.497 04:57:11 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.497 04:57:11 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:57.759 04:57:11 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.759 04:57:11 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:57.759 04:57:11 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:57.759 04:57:11 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.759 04:57:11 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:57.759 04:57:11 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.759 04:57:11 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.759 04:57:11 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.759 04:57:11 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:57.759 04:57:11 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.759 04:57:11 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:57.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.759 --rc genhtml_branch_coverage=1 00:03:57.759 --rc genhtml_function_coverage=1 00:03:57.759 --rc genhtml_legend=1 00:03:57.759 --rc geninfo_all_blocks=1 00:03:57.759 --rc geninfo_unexecuted_blocks=1 00:03:57.759 00:03:57.759 ' 00:03:57.759 04:57:11 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:57.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.759 --rc genhtml_branch_coverage=1 00:03:57.759 --rc genhtml_function_coverage=1 00:03:57.759 --rc genhtml_legend=1 00:03:57.759 --rc geninfo_all_blocks=1 00:03:57.759 --rc geninfo_unexecuted_blocks=1 00:03:57.759 00:03:57.759 ' 00:03:57.759 04:57:11 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:57.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.759 --rc genhtml_branch_coverage=1 00:03:57.759 --rc genhtml_function_coverage=1 00:03:57.759 --rc genhtml_legend=1 00:03:57.759 --rc geninfo_all_blocks=1 00:03:57.759 --rc geninfo_unexecuted_blocks=1 00:03:57.759 00:03:57.759 ' 00:03:57.759 04:57:11 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:57.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.759 --rc genhtml_branch_coverage=1 00:03:57.759 --rc genhtml_function_coverage=1 00:03:57.759 --rc genhtml_legend=1 00:03:57.759 --rc geninfo_all_blocks=1 00:03:57.759 --rc geninfo_unexecuted_blocks=1 00:03:57.759 00:03:57.759 ' 00:03:57.759 04:57:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.759 04:57:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:57.759 04:57:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:57.759 04:57:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.759 04:57:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.759 04:57:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.759 ************************************ 00:03:57.759 START TEST skip_rpc 00:03:57.759 ************************************ 00:03:57.759 04:57:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:57.759 
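The test_skip_rpc case traced below hinges on a single flag: spdk_tgt is launched with --no-rpc-server, so /var/tmp/spdk.sock is never created and the NOT-wrapped rpc_cmd spdk_get_version has to return nonzero for the test to pass. A stand-alone sketch of the same check, assuming it is run from the root of an SPDK build tree (the relative paths and the settle time are illustrative):

  # start the target without its RPC server, flags as in skip_rpc.sh below
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5   # give the reactor time to start, mirroring skip_rpc.sh@19
  # any RPC against the default socket must now fail; '!' asserts that
  ! ./scripts/rpc.py spdk_get_version
  kill $!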
04:57:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1293661 00:03:57.759 04:57:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.759 04:57:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:57.759 04:57:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:57.759 [2024-12-09 04:57:11.644679] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:03:57.759 [2024-12-09 04:57:11.644804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293661 ] 00:03:58.020 [2024-12-09 04:57:11.787370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.020 [2024-12-09 04:57:11.868345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1293661 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1293661 ']' 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1293661 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1293661 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1293661' 00:04:03.311 killing process with pid 1293661 00:04:03.311 04:57:16 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1293661 00:04:03.311 04:57:16 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1293661 00:04:03.881 00:04:03.881 real 0m6.232s 00:04:03.881 user 0m5.915s 00:04:03.881 sys 0m0.365s 00:04:03.881 04:57:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.881 04:57:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.881 ************************************ 00:04:03.881 END TEST skip_rpc 00:04:03.881 ************************************ 00:04:03.881 04:57:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:03.881 04:57:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.881 04:57:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.881 04:57:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.881 ************************************ 00:04:03.881 START TEST skip_rpc_with_json 00:04:03.881 ************************************ 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1294878 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1294878 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1294878 ']' 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.881 04:57:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.142 [2024-12-09 04:57:17.952143] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:04.142 [2024-12-09 04:57:17.952275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294878 ] 00:04:04.142 [2024-12-09 04:57:18.098571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.402 [2024-12-09 04:57:18.180099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.976 [2024-12-09 04:57:18.707866] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:04.976 request: 00:04:04.976 { 00:04:04.976 "trtype": "tcp", 00:04:04.976 "method": "nvmf_get_transports", 00:04:04.976 "req_id": 1 00:04:04.976 } 00:04:04.976 Got JSON-RPC error response 00:04:04.976 response: 00:04:04.976 { 00:04:04.976 "code": -19, 00:04:04.976 "message": "No such device" 00:04:04.976 } 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.976 [2024-12-09 04:57:18.719967] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.976 04:57:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.976 { 00:04:04.976 "subsystems": [ 00:04:04.976 { 00:04:04.976 "subsystem": "fsdev", 00:04:04.976 "config": [ 00:04:04.976 { 00:04:04.976 "method": "fsdev_set_opts", 00:04:04.976 "params": { 00:04:04.976 "fsdev_io_pool_size": 65535, 00:04:04.976 "fsdev_io_cache_size": 256 00:04:04.976 } 00:04:04.976 } 00:04:04.976 ] 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "keyring", 00:04:04.976 "config": [] 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "iobuf", 00:04:04.976 "config": [ 00:04:04.976 { 00:04:04.976 "method": "iobuf_set_options", 00:04:04.976 "params": { 00:04:04.976 "small_pool_count": 8192, 00:04:04.976 "large_pool_count": 1024, 00:04:04.976 "small_bufsize": 8192, 00:04:04.976 "large_bufsize": 135168, 00:04:04.976 "enable_numa": false 00:04:04.976 } 00:04:04.976 } 00:04:04.976 ] 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "sock", 00:04:04.976 "config": [ 
00:04:04.976 { 00:04:04.976 "method": "sock_set_default_impl", 00:04:04.976 "params": { 00:04:04.976 "impl_name": "posix" 00:04:04.976 } 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "method": "sock_impl_set_options", 00:04:04.976 "params": { 00:04:04.976 "impl_name": "ssl", 00:04:04.976 "recv_buf_size": 4096, 00:04:04.976 "send_buf_size": 4096, 00:04:04.976 "enable_recv_pipe": true, 00:04:04.976 "enable_quickack": false, 00:04:04.976 "enable_placement_id": 0, 00:04:04.976 "enable_zerocopy_send_server": true, 00:04:04.976 "enable_zerocopy_send_client": false, 00:04:04.976 "zerocopy_threshold": 0, 00:04:04.976 "tls_version": 0, 00:04:04.976 "enable_ktls": false 00:04:04.976 } 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "method": "sock_impl_set_options", 00:04:04.976 "params": { 00:04:04.976 "impl_name": "posix", 00:04:04.976 "recv_buf_size": 2097152, 00:04:04.976 "send_buf_size": 2097152, 00:04:04.976 "enable_recv_pipe": true, 00:04:04.976 "enable_quickack": false, 00:04:04.976 "enable_placement_id": 0, 00:04:04.976 "enable_zerocopy_send_server": true, 00:04:04.976 "enable_zerocopy_send_client": false, 00:04:04.976 "zerocopy_threshold": 0, 00:04:04.976 "tls_version": 0, 00:04:04.976 "enable_ktls": false 00:04:04.976 } 00:04:04.976 } 00:04:04.976 ] 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "vmd", 00:04:04.976 "config": [] 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "accel", 00:04:04.976 "config": [ 00:04:04.976 { 00:04:04.976 "method": "accel_set_options", 00:04:04.976 "params": { 00:04:04.976 "small_cache_size": 128, 00:04:04.976 "large_cache_size": 16, 00:04:04.976 "task_count": 2048, 00:04:04.976 "sequence_count": 2048, 00:04:04.976 "buf_count": 2048 00:04:04.976 } 00:04:04.976 } 00:04:04.976 ] 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "bdev", 00:04:04.976 "config": [ 00:04:04.976 { 00:04:04.976 "method": "bdev_set_options", 00:04:04.976 "params": { 00:04:04.976 "bdev_io_pool_size": 65535, 00:04:04.976 "bdev_io_cache_size": 256, 00:04:04.976 "bdev_auto_examine": true, 00:04:04.976 "iobuf_small_cache_size": 128, 00:04:04.976 "iobuf_large_cache_size": 16 00:04:04.976 } 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "method": "bdev_raid_set_options", 00:04:04.976 "params": { 00:04:04.976 "process_window_size_kb": 1024, 00:04:04.976 "process_max_bandwidth_mb_sec": 0 00:04:04.976 } 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "method": "bdev_iscsi_set_options", 00:04:04.976 "params": { 00:04:04.976 "timeout_sec": 30 00:04:04.976 } 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "method": "bdev_nvme_set_options", 00:04:04.976 "params": { 00:04:04.976 "action_on_timeout": "none", 00:04:04.976 "timeout_us": 0, 00:04:04.976 "timeout_admin_us": 0, 00:04:04.976 "keep_alive_timeout_ms": 10000, 00:04:04.976 "arbitration_burst": 0, 00:04:04.976 "low_priority_weight": 0, 00:04:04.976 "medium_priority_weight": 0, 00:04:04.976 "high_priority_weight": 0, 00:04:04.976 "nvme_adminq_poll_period_us": 10000, 00:04:04.976 "nvme_ioq_poll_period_us": 0, 00:04:04.976 "io_queue_requests": 0, 00:04:04.976 "delay_cmd_submit": true, 00:04:04.976 "transport_retry_count": 4, 00:04:04.976 "bdev_retry_count": 3, 00:04:04.976 "transport_ack_timeout": 0, 00:04:04.976 "ctrlr_loss_timeout_sec": 0, 00:04:04.976 "reconnect_delay_sec": 0, 00:04:04.976 "fast_io_fail_timeout_sec": 0, 00:04:04.976 "disable_auto_failback": false, 00:04:04.976 "generate_uuids": false, 00:04:04.976 "transport_tos": 0, 00:04:04.976 "nvme_error_stat": false, 00:04:04.976 "rdma_srq_size": 0, 00:04:04.976 "io_path_stat": 
false, 00:04:04.976 "allow_accel_sequence": false, 00:04:04.976 "rdma_max_cq_size": 0, 00:04:04.976 "rdma_cm_event_timeout_ms": 0, 00:04:04.976 "dhchap_digests": [ 00:04:04.976 "sha256", 00:04:04.976 "sha384", 00:04:04.976 "sha512" 00:04:04.976 ], 00:04:04.976 "dhchap_dhgroups": [ 00:04:04.976 "null", 00:04:04.976 "ffdhe2048", 00:04:04.976 "ffdhe3072", 00:04:04.976 "ffdhe4096", 00:04:04.976 "ffdhe6144", 00:04:04.976 "ffdhe8192" 00:04:04.976 ] 00:04:04.976 } 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "method": "bdev_nvme_set_hotplug", 00:04:04.976 "params": { 00:04:04.976 "period_us": 100000, 00:04:04.976 "enable": false 00:04:04.976 } 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "method": "bdev_wait_for_examine" 00:04:04.976 } 00:04:04.976 ] 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "scsi", 00:04:04.976 "config": null 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "scheduler", 00:04:04.976 "config": [ 00:04:04.976 { 00:04:04.976 "method": "framework_set_scheduler", 00:04:04.976 "params": { 00:04:04.976 "name": "static" 00:04:04.976 } 00:04:04.976 } 00:04:04.976 ] 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "vhost_scsi", 00:04:04.976 "config": [] 00:04:04.976 }, 00:04:04.976 { 00:04:04.976 "subsystem": "vhost_blk", 00:04:04.976 "config": [] 00:04:04.976 }, 00:04:04.976 { 00:04:04.977 "subsystem": "ublk", 00:04:04.977 "config": [] 00:04:04.977 }, 00:04:04.977 { 00:04:04.977 "subsystem": "nbd", 00:04:04.977 "config": [] 00:04:04.977 }, 00:04:04.977 { 00:04:04.977 "subsystem": "nvmf", 00:04:04.977 "config": [ 00:04:04.977 { 00:04:04.977 "method": "nvmf_set_config", 00:04:04.977 "params": { 00:04:04.977 "discovery_filter": "match_any", 00:04:04.977 "admin_cmd_passthru": { 00:04:04.977 "identify_ctrlr": false 00:04:04.977 }, 00:04:04.977 "dhchap_digests": [ 00:04:04.977 "sha256", 00:04:04.977 "sha384", 00:04:04.977 "sha512" 00:04:04.977 ], 00:04:04.977 "dhchap_dhgroups": [ 00:04:04.977 "null", 00:04:04.977 "ffdhe2048", 00:04:04.977 "ffdhe3072", 00:04:04.977 "ffdhe4096", 00:04:04.977 "ffdhe6144", 00:04:04.977 "ffdhe8192" 00:04:04.977 ] 00:04:04.977 } 00:04:04.977 }, 00:04:04.977 { 00:04:04.977 "method": "nvmf_set_max_subsystems", 00:04:04.977 "params": { 00:04:04.977 "max_subsystems": 1024 00:04:04.977 } 00:04:04.977 }, 00:04:04.977 { 00:04:04.977 "method": "nvmf_set_crdt", 00:04:04.977 "params": { 00:04:04.977 "crdt1": 0, 00:04:04.977 "crdt2": 0, 00:04:04.977 "crdt3": 0 00:04:04.977 } 00:04:04.977 }, 00:04:04.977 { 00:04:04.977 "method": "nvmf_create_transport", 00:04:04.977 "params": { 00:04:04.977 "trtype": "TCP", 00:04:04.977 "max_queue_depth": 128, 00:04:04.977 "max_io_qpairs_per_ctrlr": 127, 00:04:04.977 "in_capsule_data_size": 4096, 00:04:04.977 "max_io_size": 131072, 00:04:04.977 "io_unit_size": 131072, 00:04:04.977 "max_aq_depth": 128, 00:04:04.977 "num_shared_buffers": 511, 00:04:04.977 "buf_cache_size": 4294967295, 00:04:04.977 "dif_insert_or_strip": false, 00:04:04.977 "zcopy": false, 00:04:04.977 "c2h_success": true, 00:04:04.977 "sock_priority": 0, 00:04:04.977 "abort_timeout_sec": 1, 00:04:04.977 "ack_timeout": 0, 00:04:04.977 "data_wr_pool_size": 0 00:04:04.977 } 00:04:04.977 } 00:04:04.977 ] 00:04:04.977 }, 00:04:04.977 { 00:04:04.977 "subsystem": "iscsi", 00:04:04.977 "config": [ 00:04:04.977 { 00:04:04.977 "method": "iscsi_set_options", 00:04:04.977 "params": { 00:04:04.977 "node_base": "iqn.2016-06.io.spdk", 00:04:04.977 "max_sessions": 128, 00:04:04.977 "max_connections_per_session": 2, 00:04:04.977 "max_queue_depth": 64, 00:04:04.977 
"default_time2wait": 2, 00:04:04.977 "default_time2retain": 20, 00:04:04.977 "first_burst_length": 8192, 00:04:04.977 "immediate_data": true, 00:04:04.977 "allow_duplicated_isid": false, 00:04:04.977 "error_recovery_level": 0, 00:04:04.977 "nop_timeout": 60, 00:04:04.977 "nop_in_interval": 30, 00:04:04.977 "disable_chap": false, 00:04:04.977 "require_chap": false, 00:04:04.977 "mutual_chap": false, 00:04:04.977 "chap_group": 0, 00:04:04.977 "max_large_datain_per_connection": 64, 00:04:04.977 "max_r2t_per_connection": 4, 00:04:04.977 "pdu_pool_size": 36864, 00:04:04.977 "immediate_data_pool_size": 16384, 00:04:04.977 "data_out_pool_size": 2048 00:04:04.977 } 00:04:04.977 } 00:04:04.977 ] 00:04:04.977 } 00:04:04.977 ] 00:04:04.977 } 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1294878 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1294878 ']' 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1294878 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1294878 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1294878' 00:04:04.977 killing process with pid 1294878 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1294878 00:04:04.977 04:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1294878 00:04:06.363 04:57:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1295400 00:04:06.363 04:57:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:06.363 04:57:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1295400 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1295400 ']' 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1295400 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1295400 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1295400' 00:04:11.661 killing process with pid 1295400 00:04:11.661 
04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1295400 00:04:11.661 04:57:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1295400 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:12.663 00:04:12.663 real 0m8.522s 00:04:12.663 user 0m8.198s 00:04:12.663 sys 0m0.794s 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.663 ************************************ 00:04:12.663 END TEST skip_rpc_with_json 00:04:12.663 ************************************ 00:04:12.663 04:57:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:12.663 04:57:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.663 04:57:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.663 04:57:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.663 ************************************ 00:04:12.663 START TEST skip_rpc_with_delay 00:04:12.663 ************************************ 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.663 [2024-12-09 04:57:26.554282] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC 
server is going to be started. 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:12.663 00:04:12.663 real 0m0.172s 00:04:12.663 user 0m0.086s 00:04:12.663 sys 0m0.084s 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.663 04:57:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:12.663 ************************************ 00:04:12.663 END TEST skip_rpc_with_delay 00:04:12.663 ************************************ 00:04:12.924 04:57:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:12.924 04:57:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:12.924 04:57:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:12.924 04:57:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.924 04:57:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.924 04:57:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.924 ************************************ 00:04:12.924 START TEST exit_on_failed_rpc_init 00:04:12.924 ************************************ 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1296787 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1296787 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1296787 ']' 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.924 04:57:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.924 [2024-12-09 04:57:26.809922] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
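[Editor's note] The skip_rpc_with_delay case that just finished asserts a failure path: spdk_tgt must refuse to combine --no-rpc-server with --wait-for-rpc, which is the "Cannot use '--wait-for-rpc' if no RPC server is going to be started" error above, converted to es=1. Stripped of the NOT/valid_exec_arg machinery, the assertion amounts to this illustrative sketch (again assuming $SPDK_DIR; not the suite's exact helper):

  # Expect a non-zero exit; the flag combination errors out immediately,
  # so reaching the 'then' branch would itself be the test failure.
  if "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
  fi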
00:04:12.924 [2024-12-09 04:57:26.810054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296787 ] 00:04:13.185 [2024-12-09 04:57:26.955932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.185 [2024-12-09 04:57:27.036630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:13.757 04:57:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.757 [2024-12-09 04:57:27.654602] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:13.757 [2024-12-09 04:57:27.654713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296814 ] 00:04:14.018 [2024-12-09 04:57:27.799428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.018 [2024-12-09 04:57:27.897689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.018 [2024-12-09 04:57:27.897760] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:14.018 [2024-12-09 04:57:27.897777] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:14.018 [2024-12-09 04:57:27.897788] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1296787 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1296787 ']' 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1296787 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1296787 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1296787' 00:04:14.279 killing process with pid 1296787 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1296787 00:04:14.279 04:57:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1296787 00:04:15.665 00:04:15.665 real 0m2.595s 00:04:15.665 user 0m2.940s 00:04:15.665 sys 0m0.554s 00:04:15.665 04:57:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.665 04:57:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.665 ************************************ 00:04:15.665 END TEST exit_on_failed_rpc_init 00:04:15.665 ************************************ 00:04:15.665 04:57:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:15.665 00:04:15.665 real 0m18.048s 00:04:15.665 user 0m17.371s 00:04:15.665 sys 0m2.121s 00:04:15.665 04:57:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.665 04:57:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.665 ************************************ 00:04:15.665 END TEST skip_rpc 00:04:15.665 ************************************ 00:04:15.665 04:57:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:15.665 04:57:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.665 04:57:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.665 04:57:29 -- 
common/autotest_common.sh@10 -- # set +x 00:04:15.665 ************************************ 00:04:15.665 START TEST rpc_client 00:04:15.665 ************************************ 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:15.665 * Looking for test storage... 00:04:15.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.665 04:57:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.665 --rc genhtml_branch_coverage=1 00:04:15.665 --rc genhtml_function_coverage=1 00:04:15.665 --rc genhtml_legend=1 00:04:15.665 --rc geninfo_all_blocks=1 00:04:15.665 --rc geninfo_unexecuted_blocks=1 00:04:15.665 00:04:15.665 ' 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.665 --rc genhtml_branch_coverage=1 00:04:15.665 --rc genhtml_function_coverage=1 00:04:15.665 --rc genhtml_legend=1 00:04:15.665 --rc geninfo_all_blocks=1 00:04:15.665 --rc geninfo_unexecuted_blocks=1 00:04:15.665 00:04:15.665 ' 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.665 --rc genhtml_branch_coverage=1 00:04:15.665 --rc genhtml_function_coverage=1 00:04:15.665 --rc genhtml_legend=1 00:04:15.665 --rc geninfo_all_blocks=1 00:04:15.665 --rc geninfo_unexecuted_blocks=1 00:04:15.665 00:04:15.665 ' 00:04:15.665 04:57:29 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.665 --rc genhtml_branch_coverage=1 00:04:15.665 --rc genhtml_function_coverage=1 00:04:15.665 --rc genhtml_legend=1 00:04:15.665 --rc geninfo_all_blocks=1 00:04:15.665 --rc geninfo_unexecuted_blocks=1 00:04:15.665 00:04:15.665 ' 00:04:15.665 04:57:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:15.928 OK 00:04:15.928 04:57:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:15.928 00:04:15.928 real 0m0.270s 00:04:15.928 user 0m0.146s 00:04:15.928 sys 0m0.139s 00:04:15.928 04:57:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.928 04:57:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:15.928 ************************************ 00:04:15.928 END TEST rpc_client 00:04:15.928 ************************************ 00:04:15.928 04:57:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
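[Editor's note] rpc_client_test, which just reported OK, drives the target's JSON-RPC 2.0 interface from C rather than through scripts/rpc.py. Against a running target, the same wire exchange can be sketched from the shell; this assumes a netcat build with Unix-socket support (-U) and is illustrative only:

  # One raw JSON-RPC 2.0 request over the target's default socket.
  printf '%s\n' '{"jsonrpc": "2.0", "method": "rpc_get_methods", "id": 1}' \
    | nc -U /var/tmp/spdk.sock

scripts/rpc.py wraps exactly this framing, which is why every tgt_rpc step in the json_config run that follows goes through rpc.py -s <socket>.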
00:04:15.928 04:57:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.928 04:57:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.928 04:57:29 -- common/autotest_common.sh@10 -- # set +x 00:04:15.928 ************************************ 00:04:15.928 START TEST json_config 00:04:15.928 ************************************ 00:04:15.928 04:57:29 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:15.928 04:57:29 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.928 04:57:29 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.928 04:57:29 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.190 04:57:29 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.190 04:57:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.190 04:57:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.190 04:57:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.190 04:57:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.190 04:57:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.190 04:57:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.190 04:57:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.190 04:57:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.190 04:57:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.190 04:57:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.190 04:57:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.190 04:57:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:16.190 04:57:29 json_config -- scripts/common.sh@345 -- # : 1 00:04:16.190 04:57:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.190 04:57:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.190 04:57:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:16.190 04:57:29 json_config -- scripts/common.sh@353 -- # local d=1 00:04:16.190 04:57:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.190 04:57:29 json_config -- scripts/common.sh@355 -- # echo 1 00:04:16.190 04:57:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.190 04:57:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:16.190 04:57:29 json_config -- scripts/common.sh@353 -- # local d=2 00:04:16.190 04:57:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.190 04:57:29 json_config -- scripts/common.sh@355 -- # echo 2 00:04:16.190 04:57:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.190 04:57:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.190 04:57:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.190 04:57:29 json_config -- scripts/common.sh@368 -- # return 0 00:04:16.190 04:57:29 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.190 04:57:29 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.190 --rc genhtml_branch_coverage=1 00:04:16.190 --rc genhtml_function_coverage=1 00:04:16.190 --rc genhtml_legend=1 00:04:16.190 --rc geninfo_all_blocks=1 00:04:16.190 --rc geninfo_unexecuted_blocks=1 00:04:16.190 00:04:16.190 ' 00:04:16.190 04:57:29 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.190 --rc genhtml_branch_coverage=1 00:04:16.190 --rc genhtml_function_coverage=1 00:04:16.190 --rc genhtml_legend=1 00:04:16.190 --rc geninfo_all_blocks=1 00:04:16.190 --rc geninfo_unexecuted_blocks=1 00:04:16.190 00:04:16.190 ' 00:04:16.190 04:57:29 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.190 --rc genhtml_branch_coverage=1 00:04:16.190 --rc genhtml_function_coverage=1 00:04:16.190 --rc genhtml_legend=1 00:04:16.190 --rc geninfo_all_blocks=1 00:04:16.190 --rc geninfo_unexecuted_blocks=1 00:04:16.190 00:04:16.190 ' 00:04:16.190 04:57:29 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.190 --rc genhtml_branch_coverage=1 00:04:16.190 --rc genhtml_function_coverage=1 00:04:16.190 --rc genhtml_legend=1 00:04:16.190 --rc geninfo_all_blocks=1 00:04:16.190 --rc geninfo_unexecuted_blocks=1 00:04:16.190 00:04:16.190 ' 00:04:16.190 04:57:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:16.190 04:57:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.190 04:57:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:16.190 04:57:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.190 04:57:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.190 04:57:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.190 04:57:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.191 04:57:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.191 04:57:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.191 04:57:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.191 04:57:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:16.191 04:57:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.191 04:57:29 json_config -- nvmf/common.sh@51 -- # : 0 00:04:16.191 04:57:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.191 04:57:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:16.191 04:57:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.191 04:57:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.191 04:57:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.191 04:57:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.191 04:57:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.191 04:57:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.191 04:57:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:16.191 INFO: JSON configuration test init 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.191 04:57:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:16.191 04:57:29 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:16.191 04:57:29 json_config -- json_config/common.sh@10 -- # shift 00:04:16.191 04:57:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:16.191 04:57:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:16.191 04:57:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:16.191 04:57:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.191 04:57:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.191 04:57:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1297505 00:04:16.191 04:57:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:16.191 Waiting for target to run... 00:04:16.191 04:57:29 json_config -- json_config/common.sh@25 -- # waitforlisten 1297505 /var/tmp/spdk_tgt.sock 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 1297505 ']' 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.191 04:57:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:16.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.191 04:57:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.191 [2024-12-09 04:57:30.105899] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:16.191 [2024-12-09 04:57:30.106047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297505 ] 00:04:16.762 [2024-12-09 04:57:30.452422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.762 [2024-12-09 04:57:30.522411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.024 04:57:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.024 04:57:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:17.024 04:57:30 json_config -- json_config/common.sh@26 -- # echo '' 00:04:17.024 00:04:17.024 04:57:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:17.024 04:57:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:17.024 04:57:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.024 04:57:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.024 04:57:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:17.024 04:57:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:17.024 04:57:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:17.024 04:57:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.024 04:57:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:17.024 04:57:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:17.024 04:57:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:17.965 04:57:31 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:17.965 04:57:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:17.965 04:57:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.965 04:57:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.965 04:57:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:17.965 04:57:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:17.965 04:57:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:17.965 04:57:31 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:17.965 04:57:31 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:17.965 04:57:31 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:17.965 04:57:31 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:17.965 04:57:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:18.226 04:57:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:18.226 04:57:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:18.226 04:57:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:18.226 04:57:31 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:18.226 04:57:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:18.226 04:57:31 json_config -- json_config/json_config.sh@54 -- # sort 00:04:18.226 04:57:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:18.226 04:57:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:18.226 04:57:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:18.226 04:57:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:18.226 04:57:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.226 04:57:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:18.226 04:57:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.226 04:57:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:18.226 04:57:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:18.226 04:57:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:18.226 MallocForNvmf0 00:04:18.487 04:57:32 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:18.487 04:57:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:18.487 MallocForNvmf1 00:04:18.487 04:57:32 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:18.487 04:57:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:18.747 [2024-12-09 04:57:32.549329] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.747 04:57:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:18.747 04:57:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:19.008 04:57:32 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:19.008 04:57:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:19.008 04:57:32 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:19.008 04:57:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:19.268 04:57:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:19.268 04:57:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:19.268 [2024-12-09 04:57:33.255549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:19.528 04:57:33 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:19.528 04:57:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.528 04:57:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.528 04:57:33 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:19.528 04:57:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.528 04:57:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.528 04:57:33 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:19.528 04:57:33 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:19.528 04:57:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:19.528 MallocBdevForConfigChangeCheck 00:04:19.790 04:57:33 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:19.790 04:57:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.790 04:57:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.790 04:57:33 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:19.790 04:57:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.052 04:57:33 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:20.052 INFO: shutting down applications... 
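[Editor's note] Each tgt_rpc step above maps to one scripts/rpc.py invocation. Replayed by hand against the same socket, the nvmf configuration that was just built and saved looks like this (paths abbreviated to $SPDK_DIR; commands and values are the ones in the trace):

  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc save_config > spdk_tgt_config.json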
00:04:20.052 04:57:33 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:20.052 04:57:33 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:20.052 04:57:33 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:20.052 04:57:33 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:20.312 Calling clear_iscsi_subsystem 00:04:20.312 Calling clear_nvmf_subsystem 00:04:20.312 Calling clear_nbd_subsystem 00:04:20.312 Calling clear_ublk_subsystem 00:04:20.312 Calling clear_vhost_blk_subsystem 00:04:20.312 Calling clear_vhost_scsi_subsystem 00:04:20.312 Calling clear_bdev_subsystem 00:04:20.312 04:57:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:20.312 04:57:34 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:20.312 04:57:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:20.573 04:57:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.573 04:57:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:20.573 04:57:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:20.834 04:57:34 json_config -- json_config/json_config.sh@352 -- # break 00:04:20.834 04:57:34 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:20.834 04:57:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:20.834 04:57:34 json_config -- json_config/common.sh@31 -- # local app=target 00:04:20.834 04:57:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:20.834 04:57:34 json_config -- json_config/common.sh@35 -- # [[ -n 1297505 ]] 00:04:20.834 04:57:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1297505 00:04:20.834 04:57:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:20.835 04:57:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.835 04:57:34 json_config -- json_config/common.sh@41 -- # kill -0 1297505 00:04:20.835 04:57:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:21.406 04:57:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:21.406 04:57:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.406 04:57:35 json_config -- json_config/common.sh@41 -- # kill -0 1297505 00:04:21.406 04:57:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:21.406 04:57:35 json_config -- json_config/common.sh@43 -- # break 00:04:21.406 04:57:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:21.406 04:57:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:21.406 SPDK target shutdown done 00:04:21.406 04:57:35 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:21.406 INFO: relaunching applications... 
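[Editor's note] The shutdown that just completed is a SIGINT followed by a bounded liveness poll, per the json_config/common.sh trace above: up to 30 iterations of kill -0 with 0.5 s sleeps. Reduced to its core:

  kill -SIGINT "$spdk_pid"
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$spdk_pid" 2>/dev/null || break   # process gone: clean shutdown
    sleep 0.5
  done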
00:04:21.406 04:57:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.406 04:57:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:21.406 04:57:35 json_config -- json_config/common.sh@10 -- # shift 00:04:21.406 04:57:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.406 04:57:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.406 04:57:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.406 04:57:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.406 04:57:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.406 04:57:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1298672 00:04:21.406 04:57:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.406 Waiting for target to run... 00:04:21.406 04:57:35 json_config -- json_config/common.sh@25 -- # waitforlisten 1298672 /var/tmp/spdk_tgt.sock 00:04:21.406 04:57:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 1298672 ']' 00:04:21.406 04:57:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.406 04:57:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.406 04:57:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.406 04:57:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.406 04:57:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.406 04:57:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.406 [2024-12-09 04:57:35.237322] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:21.406 [2024-12-09 04:57:35.237447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298672 ] 00:04:21.666 [2024-12-09 04:57:35.591021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.926 [2024-12-09 04:57:35.666140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.867 [2024-12-09 04:57:36.502873] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.867 [2024-12-09 04:57:36.535260] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:22.867 04:57:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.867 04:57:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:22.867 04:57:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:22.867 00:04:22.867 04:57:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:22.867 04:57:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:22.867 INFO: Checking if target configuration is the same... 
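The check announced above works by dumping the live configuration over RPC, canonicalizing both JSON documents with config_filter.py -method sort, and diffing the results. A condensed equivalent of what json_diff.sh does in the next trace, with the mktemp plumbing elided (we assume config_filter.py filters stdin to stdout, which matches its argument-less invocation below):

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=test/json_config/config_filter.py
    if diff -u <($rpc save_config | $filter -method sort) \
               <($filter -method sort < spdk_tgt_config.json); then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi

The second pass deletes MallocBdevForConfigChangeCheck first, so the same diff is then expected to exit 1.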
00:04:22.867 04:57:36 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.867 04:57:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:22.867 04:57:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.867 + '[' 2 -ne 2 ']' 00:04:22.867 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:22.867 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:22.867 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.867 +++ basename /dev/fd/62 00:04:22.867 ++ mktemp /tmp/62.XXX 00:04:22.867 + tmp_file_1=/tmp/62.WfA 00:04:22.867 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.867 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:22.867 + tmp_file_2=/tmp/spdk_tgt_config.json.70u 00:04:22.867 + ret=0 00:04:22.867 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.127 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.127 + diff -u /tmp/62.WfA /tmp/spdk_tgt_config.json.70u 00:04:23.127 + echo 'INFO: JSON config files are the same' 00:04:23.127 INFO: JSON config files are the same 00:04:23.127 + rm /tmp/62.WfA /tmp/spdk_tgt_config.json.70u 00:04:23.127 + exit 0 00:04:23.127 04:57:36 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:23.127 04:57:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:23.127 INFO: changing configuration and checking if this can be detected... 00:04:23.127 04:57:36 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:23.127 04:57:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:23.388 04:57:37 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:23.388 04:57:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.389 04:57:37 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.389 + '[' 2 -ne 2 ']' 00:04:23.389 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:23.389 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:23.389 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.389 +++ basename /dev/fd/62 00:04:23.389 ++ mktemp /tmp/62.XXX 00:04:23.389 + tmp_file_1=/tmp/62.xQs 00:04:23.389 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.389 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:23.389 + tmp_file_2=/tmp/spdk_tgt_config.json.BNs 00:04:23.389 + ret=0 00:04:23.389 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.650 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.650 + diff -u /tmp/62.xQs /tmp/spdk_tgt_config.json.BNs 00:04:23.650 + ret=1 00:04:23.650 + echo '=== Start of file: /tmp/62.xQs ===' 00:04:23.650 + cat /tmp/62.xQs 00:04:23.650 + echo '=== End of file: /tmp/62.xQs ===' 00:04:23.650 + echo '' 00:04:23.650 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BNs ===' 00:04:23.650 + cat /tmp/spdk_tgt_config.json.BNs 00:04:23.650 + echo '=== End of file: /tmp/spdk_tgt_config.json.BNs ===' 00:04:23.650 + echo '' 00:04:23.650 + rm /tmp/62.xQs /tmp/spdk_tgt_config.json.BNs 00:04:23.650 + exit 1 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:23.650 INFO: configuration change detected. 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 1298672 ]] 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.650 04:57:37 json_config -- json_config/json_config.sh@330 -- # killprocess 1298672 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@954 -- # '[' -z 1298672 ']' 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@958 -- # kill -0 1298672 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@959 -- # uname 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.650 04:57:37 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1298672 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1298672' 00:04:23.650 killing process with pid 1298672 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@973 -- # kill 1298672 00:04:23.650 04:57:37 json_config -- common/autotest_common.sh@978 -- # wait 1298672 00:04:24.596 04:57:38 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.596 04:57:38 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:24.596 04:57:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.596 04:57:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.596 04:57:38 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:24.596 04:57:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:24.596 INFO: Success 00:04:24.596 00:04:24.596 real 0m8.531s 00:04:24.596 user 0m10.231s 00:04:24.596 sys 0m2.164s 00:04:24.596 04:57:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.596 04:57:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.596 ************************************ 00:04:24.596 END TEST json_config 00:04:24.596 ************************************ 00:04:24.596 04:57:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:24.596 04:57:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.596 04:57:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.596 04:57:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.596 ************************************ 00:04:24.596 START TEST json_config_extra_key 00:04:24.596 ************************************ 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.596 04:57:38 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.596 04:57:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.596 --rc genhtml_branch_coverage=1 00:04:24.596 --rc genhtml_function_coverage=1 00:04:24.596 --rc genhtml_legend=1 00:04:24.596 --rc geninfo_all_blocks=1 00:04:24.596 --rc geninfo_unexecuted_blocks=1 00:04:24.596 00:04:24.596 ' 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.596 --rc genhtml_branch_coverage=1 00:04:24.596 --rc genhtml_function_coverage=1 00:04:24.596 --rc genhtml_legend=1 00:04:24.596 --rc geninfo_all_blocks=1 00:04:24.596 --rc geninfo_unexecuted_blocks=1 00:04:24.596 00:04:24.596 ' 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.596 --rc genhtml_branch_coverage=1 00:04:24.596 --rc genhtml_function_coverage=1 00:04:24.596 --rc genhtml_legend=1 00:04:24.596 --rc geninfo_all_blocks=1 00:04:24.596 --rc geninfo_unexecuted_blocks=1 00:04:24.596 00:04:24.596 ' 00:04:24.596 04:57:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.596 --rc genhtml_branch_coverage=1 00:04:24.596 --rc genhtml_function_coverage=1 00:04:24.596 --rc genhtml_legend=1 00:04:24.596 --rc geninfo_all_blocks=1 00:04:24.596 --rc geninfo_unexecuted_blocks=1 00:04:24.596 00:04:24.596 ' 00:04:24.596 04:57:38 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.596 04:57:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.597 04:57:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.597 04:57:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.597 04:57:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.597 04:57:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.597 04:57:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.597 04:57:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.597 04:57:38 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.597 04:57:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:24.597 04:57:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.597 04:57:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:24.597 INFO: launching applications... 
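Note the non-fatal bash error above: nvmf/common.sh line 33 ends up running [ '' -eq 1 ] because an unset flag expands to the empty string, and test(1) requires an integer operand. A defensive sketch of the pattern that avoids it (the flag name here is illustrative, not the variable common.sh actually checks):

    # '[' '' -eq 1 ']'  ->  "[: : integer expression expected"
    # defaulting the expansion keeps test(1) from ever seeing an empty string:
    if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
        echo 'flag enabled'
    fi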
00:04:24.597 04:57:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1299461 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.859 Waiting for target to run... 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1299461 /var/tmp/spdk_tgt.sock 00:04:24.859 04:57:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1299461 ']' 00:04:24.859 04:57:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.859 04:57:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:24.859 04:57:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.859 04:57:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.859 04:57:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.859 04:57:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:24.859 [2024-12-09 04:57:38.705924] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:24.859 [2024-12-09 04:57:38.706070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299461 ] 00:04:25.120 [2024-12-09 04:57:39.038776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.120 [2024-12-09 04:57:39.109558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.691 04:57:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.691 04:57:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:25.691 04:57:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:25.691 00:04:25.691 04:57:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:25.691 INFO: shutting down applications... 
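json_config_test_start_app above is the same launch-and-wait helper seen earlier: start spdk_tgt with a JSON config, record the pid, and block until the RPC socket answers. A sketch with this run's arguments (the spdk_get_version probe is our stand-in for waitforlisten's readiness check):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    tgt_pid=$!                                       # 1299461 in this run
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5                                    # poll until the UNIX socket accepts RPCs
    done
    echo "target up as pid $tgt_pid"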
00:04:25.691 04:57:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:25.691 04:57:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:25.691 04:57:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:25.691 04:57:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1299461 ]] 00:04:25.691 04:57:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1299461 00:04:25.691 04:57:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:25.691 04:57:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.691 04:57:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1299461 00:04:25.691 04:57:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.264 04:57:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.264 04:57:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.264 04:57:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1299461 00:04:26.264 04:57:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.838 04:57:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.838 04:57:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.838 04:57:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1299461 00:04:26.838 04:57:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:27.099 04:57:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:27.099 04:57:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.099 04:57:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1299461 00:04:27.100 04:57:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:27.100 04:57:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:27.100 04:57:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:27.100 04:57:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:27.100 SPDK target shutdown done 00:04:27.100 04:57:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:27.100 Success 00:04:27.100 00:04:27.100 real 0m2.648s 00:04:27.100 user 0m2.278s 00:04:27.100 sys 0m0.560s 00:04:27.100 04:57:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.100 04:57:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.100 ************************************ 00:04:27.100 END TEST json_config_extra_key 00:04:27.100 ************************************ 00:04:27.100 04:57:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.100 04:57:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.100 04:57:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.100 04:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.362 ************************************ 00:04:27.362 START TEST alias_rpc 00:04:27.362 ************************************ 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.362 * Looking for test storage... 
00:04:27.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.362 04:57:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.362 --rc genhtml_branch_coverage=1 00:04:27.362 --rc genhtml_function_coverage=1 00:04:27.362 --rc genhtml_legend=1 00:04:27.362 --rc geninfo_all_blocks=1 00:04:27.362 --rc geninfo_unexecuted_blocks=1 00:04:27.362 00:04:27.362 ' 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.362 --rc genhtml_branch_coverage=1 00:04:27.362 --rc genhtml_function_coverage=1 00:04:27.362 --rc genhtml_legend=1 00:04:27.362 --rc geninfo_all_blocks=1 00:04:27.362 --rc geninfo_unexecuted_blocks=1 00:04:27.362 00:04:27.362 ' 00:04:27.362 04:57:41 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.362 --rc genhtml_branch_coverage=1 00:04:27.362 --rc genhtml_function_coverage=1 00:04:27.362 --rc genhtml_legend=1 00:04:27.362 --rc geninfo_all_blocks=1 00:04:27.362 --rc geninfo_unexecuted_blocks=1 00:04:27.362 00:04:27.362 ' 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.362 --rc genhtml_branch_coverage=1 00:04:27.362 --rc genhtml_function_coverage=1 00:04:27.362 --rc genhtml_legend=1 00:04:27.362 --rc geninfo_all_blocks=1 00:04:27.362 --rc geninfo_unexecuted_blocks=1 00:04:27.362 00:04:27.362 ' 00:04:27.362 04:57:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:27.362 04:57:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1299949 00:04:27.362 04:57:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1299949 00:04:27.362 04:57:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1299949 ']' 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.362 04:57:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.624 [2024-12-09 04:57:41.393652] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:27.624 [2024-12-09 04:57:41.393762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299949 ] 00:04:27.624 [2024-12-09 04:57:41.532357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.624 [2024-12-09 04:57:41.610490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.197 04:57:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.197 04:57:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:28.197 04:57:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:28.457 04:57:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1299949 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1299949 ']' 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1299949 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1299949 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1299949' 00:04:28.457 killing process with pid 1299949 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@973 -- # kill 1299949 00:04:28.457 04:57:42 alias_rpc -- common/autotest_common.sh@978 -- # wait 1299949 00:04:29.844 00:04:29.844 real 0m2.493s 00:04:29.844 user 0m2.545s 00:04:29.844 sys 0m0.525s 00:04:29.844 04:57:43 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.844 04:57:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.844 ************************************ 00:04:29.844 END TEST alias_rpc 00:04:29.844 ************************************ 00:04:29.844 04:57:43 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:29.844 04:57:43 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:29.844 04:57:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.844 04:57:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.844 04:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.844 ************************************ 00:04:29.844 START TEST spdkcli_tcp 00:04:29.844 ************************************ 00:04:29.844 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:29.844 * Looking for test storage... 
00:04:29.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:29.844 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.844 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.844 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.105 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.105 04:57:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:30.105 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.105 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.105 --rc genhtml_branch_coverage=1 00:04:30.105 --rc genhtml_function_coverage=1 00:04:30.105 --rc genhtml_legend=1 00:04:30.105 --rc geninfo_all_blocks=1 00:04:30.105 --rc geninfo_unexecuted_blocks=1 00:04:30.105 00:04:30.105 ' 00:04:30.105 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.105 --rc genhtml_branch_coverage=1 00:04:30.105 --rc genhtml_function_coverage=1 00:04:30.105 --rc genhtml_legend=1 00:04:30.105 --rc geninfo_all_blocks=1 00:04:30.105 --rc 
geninfo_unexecuted_blocks=1 00:04:30.105 00:04:30.105 ' 00:04:30.105 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.105 --rc genhtml_branch_coverage=1 00:04:30.105 --rc genhtml_function_coverage=1 00:04:30.105 --rc genhtml_legend=1 00:04:30.105 --rc geninfo_all_blocks=1 00:04:30.105 --rc geninfo_unexecuted_blocks=1 00:04:30.105 00:04:30.105 ' 00:04:30.105 04:57:43 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.105 --rc genhtml_branch_coverage=1 00:04:30.105 --rc genhtml_function_coverage=1 00:04:30.105 --rc genhtml_legend=1 00:04:30.105 --rc geninfo_all_blocks=1 00:04:30.105 --rc geninfo_unexecuted_blocks=1 00:04:30.106 00:04:30.106 ' 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:30.106 04:57:43 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.106 04:57:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1300672 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1300672 00:04:30.106 04:57:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:30.106 04:57:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1300672 ']' 00:04:30.106 04:57:43 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.106 04:57:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.106 04:57:43 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.106 04:57:43 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.106 04:57:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.106 [2024-12-09 04:57:43.995900] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:30.106 [2024-12-09 04:57:43.996033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300672 ] 00:04:30.366 [2024-12-09 04:57:44.140502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.366 [2024-12-09 04:57:44.218259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.366 [2024-12-09 04:57:44.218284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.942 04:57:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.942 04:57:44 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:30.942 04:57:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1300690 00:04:30.942 04:57:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:30.942 04:57:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:30.942 [ 00:04:30.942 "bdev_malloc_delete", 00:04:30.942 "bdev_malloc_create", 00:04:30.942 "bdev_null_resize", 00:04:30.942 "bdev_null_delete", 00:04:30.942 "bdev_null_create", 00:04:30.942 "bdev_nvme_cuse_unregister", 00:04:30.942 "bdev_nvme_cuse_register", 00:04:30.942 "bdev_opal_new_user", 00:04:30.942 "bdev_opal_set_lock_state", 00:04:30.942 "bdev_opal_delete", 00:04:30.942 "bdev_opal_get_info", 00:04:30.942 "bdev_opal_create", 00:04:30.942 "bdev_nvme_opal_revert", 00:04:30.942 "bdev_nvme_opal_init", 00:04:30.942 "bdev_nvme_send_cmd", 00:04:30.942 "bdev_nvme_set_keys", 00:04:30.943 "bdev_nvme_get_path_iostat", 00:04:30.943 "bdev_nvme_get_mdns_discovery_info", 00:04:30.943 "bdev_nvme_stop_mdns_discovery", 00:04:30.943 "bdev_nvme_start_mdns_discovery", 00:04:30.943 "bdev_nvme_set_multipath_policy", 00:04:30.943 "bdev_nvme_set_preferred_path", 00:04:30.943 "bdev_nvme_get_io_paths", 00:04:30.943 "bdev_nvme_remove_error_injection", 00:04:30.943 "bdev_nvme_add_error_injection", 00:04:30.943 "bdev_nvme_get_discovery_info", 00:04:30.943 "bdev_nvme_stop_discovery", 00:04:30.943 "bdev_nvme_start_discovery", 00:04:30.943 "bdev_nvme_get_controller_health_info", 00:04:30.943 "bdev_nvme_disable_controller", 00:04:30.943 "bdev_nvme_enable_controller", 00:04:30.943 "bdev_nvme_reset_controller", 00:04:30.943 "bdev_nvme_get_transport_statistics", 00:04:30.943 "bdev_nvme_apply_firmware", 00:04:30.943 "bdev_nvme_detach_controller", 00:04:30.943 "bdev_nvme_get_controllers", 00:04:30.943 "bdev_nvme_attach_controller", 00:04:30.943 "bdev_nvme_set_hotplug", 00:04:30.943 "bdev_nvme_set_options", 00:04:30.943 "bdev_passthru_delete", 00:04:30.943 "bdev_passthru_create", 00:04:30.943 "bdev_lvol_set_parent_bdev", 00:04:30.943 "bdev_lvol_set_parent", 00:04:30.943 "bdev_lvol_check_shallow_copy", 00:04:30.943 "bdev_lvol_start_shallow_copy", 00:04:30.943 "bdev_lvol_grow_lvstore", 00:04:30.943 "bdev_lvol_get_lvols", 00:04:30.943 "bdev_lvol_get_lvstores", 00:04:30.943 "bdev_lvol_delete", 00:04:30.943 "bdev_lvol_set_read_only", 00:04:30.943 "bdev_lvol_resize", 00:04:30.943 "bdev_lvol_decouple_parent", 00:04:30.943 "bdev_lvol_inflate", 00:04:30.943 "bdev_lvol_rename", 00:04:30.943 "bdev_lvol_clone_bdev", 00:04:30.943 "bdev_lvol_clone", 00:04:30.943 "bdev_lvol_snapshot", 00:04:30.943 "bdev_lvol_create", 00:04:30.943 "bdev_lvol_delete_lvstore", 00:04:30.943 "bdev_lvol_rename_lvstore", 
00:04:30.943 "bdev_lvol_create_lvstore", 00:04:30.943 "bdev_raid_set_options", 00:04:30.943 "bdev_raid_remove_base_bdev", 00:04:30.943 "bdev_raid_add_base_bdev", 00:04:30.943 "bdev_raid_delete", 00:04:30.943 "bdev_raid_create", 00:04:30.943 "bdev_raid_get_bdevs", 00:04:30.943 "bdev_error_inject_error", 00:04:30.943 "bdev_error_delete", 00:04:30.943 "bdev_error_create", 00:04:30.943 "bdev_split_delete", 00:04:30.943 "bdev_split_create", 00:04:30.943 "bdev_delay_delete", 00:04:30.943 "bdev_delay_create", 00:04:30.943 "bdev_delay_update_latency", 00:04:30.943 "bdev_zone_block_delete", 00:04:30.943 "bdev_zone_block_create", 00:04:30.943 "blobfs_create", 00:04:30.943 "blobfs_detect", 00:04:30.943 "blobfs_set_cache_size", 00:04:30.943 "bdev_aio_delete", 00:04:30.943 "bdev_aio_rescan", 00:04:30.943 "bdev_aio_create", 00:04:30.943 "bdev_ftl_set_property", 00:04:30.943 "bdev_ftl_get_properties", 00:04:30.943 "bdev_ftl_get_stats", 00:04:30.943 "bdev_ftl_unmap", 00:04:30.943 "bdev_ftl_unload", 00:04:30.943 "bdev_ftl_delete", 00:04:30.943 "bdev_ftl_load", 00:04:30.943 "bdev_ftl_create", 00:04:30.943 "bdev_virtio_attach_controller", 00:04:30.943 "bdev_virtio_scsi_get_devices", 00:04:30.943 "bdev_virtio_detach_controller", 00:04:30.943 "bdev_virtio_blk_set_hotplug", 00:04:30.943 "bdev_iscsi_delete", 00:04:30.943 "bdev_iscsi_create", 00:04:30.943 "bdev_iscsi_set_options", 00:04:30.943 "accel_error_inject_error", 00:04:30.943 "ioat_scan_accel_module", 00:04:30.943 "dsa_scan_accel_module", 00:04:30.943 "iaa_scan_accel_module", 00:04:30.943 "keyring_file_remove_key", 00:04:30.943 "keyring_file_add_key", 00:04:30.943 "keyring_linux_set_options", 00:04:30.943 "fsdev_aio_delete", 00:04:30.943 "fsdev_aio_create", 00:04:30.943 "iscsi_get_histogram", 00:04:30.943 "iscsi_enable_histogram", 00:04:30.943 "iscsi_set_options", 00:04:30.943 "iscsi_get_auth_groups", 00:04:30.943 "iscsi_auth_group_remove_secret", 00:04:30.943 "iscsi_auth_group_add_secret", 00:04:30.943 "iscsi_delete_auth_group", 00:04:30.943 "iscsi_create_auth_group", 00:04:30.943 "iscsi_set_discovery_auth", 00:04:30.943 "iscsi_get_options", 00:04:30.943 "iscsi_target_node_request_logout", 00:04:30.943 "iscsi_target_node_set_redirect", 00:04:30.943 "iscsi_target_node_set_auth", 00:04:30.943 "iscsi_target_node_add_lun", 00:04:30.943 "iscsi_get_stats", 00:04:30.943 "iscsi_get_connections", 00:04:30.943 "iscsi_portal_group_set_auth", 00:04:30.943 "iscsi_start_portal_group", 00:04:30.943 "iscsi_delete_portal_group", 00:04:30.943 "iscsi_create_portal_group", 00:04:30.943 "iscsi_get_portal_groups", 00:04:30.943 "iscsi_delete_target_node", 00:04:30.943 "iscsi_target_node_remove_pg_ig_maps", 00:04:30.943 "iscsi_target_node_add_pg_ig_maps", 00:04:30.943 "iscsi_create_target_node", 00:04:30.943 "iscsi_get_target_nodes", 00:04:30.943 "iscsi_delete_initiator_group", 00:04:30.943 "iscsi_initiator_group_remove_initiators", 00:04:30.943 "iscsi_initiator_group_add_initiators", 00:04:30.943 "iscsi_create_initiator_group", 00:04:30.943 "iscsi_get_initiator_groups", 00:04:30.943 "nvmf_set_crdt", 00:04:30.943 "nvmf_set_config", 00:04:30.943 "nvmf_set_max_subsystems", 00:04:30.943 "nvmf_stop_mdns_prr", 00:04:30.943 "nvmf_publish_mdns_prr", 00:04:30.943 "nvmf_subsystem_get_listeners", 00:04:30.943 "nvmf_subsystem_get_qpairs", 00:04:30.943 "nvmf_subsystem_get_controllers", 00:04:30.943 "nvmf_get_stats", 00:04:30.943 "nvmf_get_transports", 00:04:30.943 "nvmf_create_transport", 00:04:30.943 "nvmf_get_targets", 00:04:30.943 "nvmf_delete_target", 00:04:30.943 "nvmf_create_target", 
00:04:30.943 "nvmf_subsystem_allow_any_host", 00:04:30.943 "nvmf_subsystem_set_keys", 00:04:30.943 "nvmf_subsystem_remove_host", 00:04:30.943 "nvmf_subsystem_add_host", 00:04:30.943 "nvmf_ns_remove_host", 00:04:30.943 "nvmf_ns_add_host", 00:04:30.943 "nvmf_subsystem_remove_ns", 00:04:30.943 "nvmf_subsystem_set_ns_ana_group", 00:04:30.943 "nvmf_subsystem_add_ns", 00:04:30.943 "nvmf_subsystem_listener_set_ana_state", 00:04:30.943 "nvmf_discovery_get_referrals", 00:04:30.943 "nvmf_discovery_remove_referral", 00:04:30.943 "nvmf_discovery_add_referral", 00:04:30.943 "nvmf_subsystem_remove_listener", 00:04:30.943 "nvmf_subsystem_add_listener", 00:04:30.943 "nvmf_delete_subsystem", 00:04:30.943 "nvmf_create_subsystem", 00:04:30.943 "nvmf_get_subsystems", 00:04:30.943 "env_dpdk_get_mem_stats", 00:04:30.943 "nbd_get_disks", 00:04:30.943 "nbd_stop_disk", 00:04:30.943 "nbd_start_disk", 00:04:30.943 "ublk_recover_disk", 00:04:30.943 "ublk_get_disks", 00:04:30.943 "ublk_stop_disk", 00:04:30.943 "ublk_start_disk", 00:04:30.943 "ublk_destroy_target", 00:04:30.943 "ublk_create_target", 00:04:30.943 "virtio_blk_create_transport", 00:04:30.943 "virtio_blk_get_transports", 00:04:30.943 "vhost_controller_set_coalescing", 00:04:30.943 "vhost_get_controllers", 00:04:30.943 "vhost_delete_controller", 00:04:30.943 "vhost_create_blk_controller", 00:04:30.943 "vhost_scsi_controller_remove_target", 00:04:30.943 "vhost_scsi_controller_add_target", 00:04:30.943 "vhost_start_scsi_controller", 00:04:30.943 "vhost_create_scsi_controller", 00:04:30.943 "thread_set_cpumask", 00:04:30.943 "scheduler_set_options", 00:04:30.943 "framework_get_governor", 00:04:30.943 "framework_get_scheduler", 00:04:30.943 "framework_set_scheduler", 00:04:30.943 "framework_get_reactors", 00:04:30.943 "thread_get_io_channels", 00:04:30.943 "thread_get_pollers", 00:04:30.943 "thread_get_stats", 00:04:30.943 "framework_monitor_context_switch", 00:04:30.943 "spdk_kill_instance", 00:04:30.943 "log_enable_timestamps", 00:04:30.943 "log_get_flags", 00:04:30.943 "log_clear_flag", 00:04:30.943 "log_set_flag", 00:04:30.943 "log_get_level", 00:04:30.943 "log_set_level", 00:04:30.943 "log_get_print_level", 00:04:30.943 "log_set_print_level", 00:04:30.943 "framework_enable_cpumask_locks", 00:04:30.943 "framework_disable_cpumask_locks", 00:04:30.943 "framework_wait_init", 00:04:30.943 "framework_start_init", 00:04:30.943 "scsi_get_devices", 00:04:30.943 "bdev_get_histogram", 00:04:30.943 "bdev_enable_histogram", 00:04:30.943 "bdev_set_qos_limit", 00:04:30.943 "bdev_set_qd_sampling_period", 00:04:30.943 "bdev_get_bdevs", 00:04:30.943 "bdev_reset_iostat", 00:04:30.943 "bdev_get_iostat", 00:04:30.943 "bdev_examine", 00:04:30.943 "bdev_wait_for_examine", 00:04:30.943 "bdev_set_options", 00:04:30.943 "accel_get_stats", 00:04:30.943 "accel_set_options", 00:04:30.943 "accel_set_driver", 00:04:30.943 "accel_crypto_key_destroy", 00:04:30.943 "accel_crypto_keys_get", 00:04:30.943 "accel_crypto_key_create", 00:04:30.943 "accel_assign_opc", 00:04:30.943 "accel_get_module_info", 00:04:30.943 "accel_get_opc_assignments", 00:04:30.943 "vmd_rescan", 00:04:30.943 "vmd_remove_device", 00:04:30.943 "vmd_enable", 00:04:30.943 "sock_get_default_impl", 00:04:30.943 "sock_set_default_impl", 00:04:30.943 "sock_impl_set_options", 00:04:30.943 "sock_impl_get_options", 00:04:30.943 "iobuf_get_stats", 00:04:30.943 "iobuf_set_options", 00:04:30.943 "keyring_get_keys", 00:04:30.943 "framework_get_pci_devices", 00:04:30.943 "framework_get_config", 00:04:30.943 "framework_get_subsystems", 
00:04:30.943 "fsdev_set_opts", 00:04:30.943 "fsdev_get_opts", 00:04:30.943 "trace_get_info", 00:04:30.943 "trace_get_tpoint_group_mask", 00:04:30.943 "trace_disable_tpoint_group", 00:04:30.943 "trace_enable_tpoint_group", 00:04:30.943 "trace_clear_tpoint_mask", 00:04:30.943 "trace_set_tpoint_mask", 00:04:30.943 "notify_get_notifications", 00:04:30.943 "notify_get_types", 00:04:30.943 "spdk_get_version", 00:04:30.943 "rpc_get_methods" 00:04:30.943 ] 00:04:30.943 04:57:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:30.943 04:57:44 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.943 04:57:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.222 04:57:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:31.223 04:57:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1300672 00:04:31.223 04:57:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1300672 ']' 00:04:31.223 04:57:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1300672 00:04:31.223 04:57:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:31.223 04:57:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.223 04:57:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1300672 00:04:31.223 04:57:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.223 04:57:45 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.223 04:57:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1300672' 00:04:31.223 killing process with pid 1300672 00:04:31.223 04:57:45 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1300672 00:04:31.223 04:57:45 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1300672 00:04:32.603 00:04:32.603 real 0m2.514s 00:04:32.603 user 0m4.349s 00:04:32.603 sys 0m0.609s 00:04:32.603 04:57:46 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.603 04:57:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.603 ************************************ 00:04:32.603 END TEST spdkcli_tcp 00:04:32.603 ************************************ 00:04:32.603 04:57:46 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.603 04:57:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.603 04:57:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.603 04:57:46 -- common/autotest_common.sh@10 -- # set +x 00:04:32.603 ************************************ 00:04:32.603 START TEST dpdk_mem_utility 00:04:32.603 ************************************ 00:04:32.603 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.603 * Looking for test storage... 
00:04:32.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:32.603 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.603 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.603 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.603 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.603 04:57:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.603 04:57:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.604 04:57:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.604 --rc genhtml_branch_coverage=1 00:04:32.604 --rc genhtml_function_coverage=1 00:04:32.604 --rc genhtml_legend=1 00:04:32.604 --rc geninfo_all_blocks=1 00:04:32.604 --rc geninfo_unexecuted_blocks=1 00:04:32.604 00:04:32.604 ' 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.604 --rc 
genhtml_branch_coverage=1 00:04:32.604 --rc genhtml_function_coverage=1 00:04:32.604 --rc genhtml_legend=1 00:04:32.604 --rc geninfo_all_blocks=1 00:04:32.604 --rc geninfo_unexecuted_blocks=1 00:04:32.604 00:04:32.604 ' 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.604 --rc genhtml_branch_coverage=1 00:04:32.604 --rc genhtml_function_coverage=1 00:04:32.604 --rc genhtml_legend=1 00:04:32.604 --rc geninfo_all_blocks=1 00:04:32.604 --rc geninfo_unexecuted_blocks=1 00:04:32.604 00:04:32.604 ' 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.604 --rc genhtml_branch_coverage=1 00:04:32.604 --rc genhtml_function_coverage=1 00:04:32.604 --rc genhtml_legend=1 00:04:32.604 --rc geninfo_all_blocks=1 00:04:32.604 --rc geninfo_unexecuted_blocks=1 00:04:32.604 00:04:32.604 ' 00:04:32.604 04:57:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:32.604 04:57:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1301106 00:04:32.604 04:57:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1301106 00:04:32.604 04:57:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1301106 ']' 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.604 04:57:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.604 [2024-12-09 04:57:46.574634] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
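The heap dump that follows is produced in two steps, as the xtrace below shows: the test issues env_dpdk_get_mem_stats to the target, which writes /tmp/spdk_mem_dump.txt, and then post-processes that file with scripts/dpdk_mem_info.py, once for the summary and once with -m 0 for the element-level view. A minimal by-hand sketch of the same flow, assuming a target already listening on the default RPC socket:

  scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                 # heap/mempool/memzone summary
  scripts/dpdk_mem_info.py -m 0            # per-element dump, as reproduced below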
00:04:32.604 [2024-12-09 04:57:46.574769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301106 ] 00:04:32.863 [2024-12-09 04:57:46.721400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.863 [2024-12-09 04:57:46.802493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.435 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.435 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:33.435 04:57:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:33.435 04:57:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:33.435 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.435 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.435 { 00:04:33.435 "filename": "/tmp/spdk_mem_dump.txt" 00:04:33.435 } 00:04:33.435 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.435 04:57:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:33.435 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:33.435 1 heaps totaling size 824.000000 MiB 00:04:33.435 size: 824.000000 MiB heap id: 0 00:04:33.435 end heaps---------- 00:04:33.435 9 mempools totaling size 603.782043 MiB 00:04:33.435 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:33.435 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:33.435 size: 100.555481 MiB name: bdev_io_1301106 00:04:33.435 size: 50.003479 MiB name: msgpool_1301106 00:04:33.435 size: 36.509338 MiB name: fsdev_io_1301106 00:04:33.435 size: 21.763794 MiB name: PDU_Pool 00:04:33.435 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:33.435 size: 4.133484 MiB name: evtpool_1301106 00:04:33.435 size: 0.026123 MiB name: Session_Pool 00:04:33.435 end mempools------- 00:04:33.435 6 memzones totaling size 4.142822 MiB 00:04:33.435 size: 1.000366 MiB name: RG_ring_0_1301106 00:04:33.435 size: 1.000366 MiB name: RG_ring_1_1301106 00:04:33.435 size: 1.000366 MiB name: RG_ring_4_1301106 00:04:33.435 size: 1.000366 MiB name: RG_ring_5_1301106 00:04:33.435 size: 0.125366 MiB name: RG_ring_2_1301106 00:04:33.435 size: 0.015991 MiB name: RG_ring_3_1301106 00:04:33.435 end memzones------- 00:04:33.435 04:57:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:33.435 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:04:33.435 list of free elements. 
size: 16.847595 MiB 00:04:33.435 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:33.435 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:33.435 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:33.435 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:33.435 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:33.435 element at address: 0x200019a00000 with size: 0.999329 MiB 00:04:33.435 element at address: 0x200000400000 with size: 0.998108 MiB 00:04:33.435 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:33.435 element at address: 0x200019200000 with size: 0.959900 MiB 00:04:33.435 element at address: 0x200019d00040 with size: 0.937256 MiB 00:04:33.435 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:33.435 element at address: 0x20001b400000 with size: 0.583191 MiB 00:04:33.435 element at address: 0x200000c00000 with size: 0.495300 MiB 00:04:33.435 element at address: 0x200019600000 with size: 0.491150 MiB 00:04:33.436 element at address: 0x200019e00000 with size: 0.485657 MiB 00:04:33.436 element at address: 0x200012c00000 with size: 0.436157 MiB 00:04:33.436 element at address: 0x200028800000 with size: 0.411072 MiB 00:04:33.436 element at address: 0x200000800000 with size: 0.355286 MiB 00:04:33.436 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:04:33.436 list of standard malloc elements. size: 199.221497 MiB 00:04:33.436 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:33.436 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:33.436 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:33.436 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:33.436 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:33.436 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:33.436 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:33.436 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:33.436 element at address: 0x200012bff040 with size: 0.000427 MiB 00:04:33.436 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:04:33.436 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:33.436 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:33.436 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:33.436 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:33.436 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:04:33.436 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:33.436 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:33.436 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:33.436 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:04:33.436 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:33.436 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bff200 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bff300 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bff400 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bff500 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bff600 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bff700 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bff800 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bff900 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:33.436 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:33.436 list of memzone associated elements. size: 607.930908 MiB 00:04:33.436 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:33.436 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:33.436 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:33.436 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:33.436 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:33.436 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1301106_0 00:04:33.436 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:33.436 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1301106_0 00:04:33.436 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:33.436 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1301106_0 00:04:33.436 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:33.436 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:33.436 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:33.436 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:33.436 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:33.436 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1301106_0 00:04:33.436 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:33.436 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1301106 00:04:33.436 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:33.436 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1301106 00:04:33.436 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:33.436 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:33.436 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:33.436 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:33.436 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:33.436 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:33.436 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:33.436 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:33.436 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:33.436 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1301106 00:04:33.436 element at address: 0x2000008ffb80 
with size: 1.000549 MiB 00:04:33.436 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1301106 00:04:33.436 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:33.436 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1301106 00:04:33.436 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:33.436 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1301106 00:04:33.436 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:33.436 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1301106 00:04:33.436 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:33.436 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1301106 00:04:33.436 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:04:33.436 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:33.436 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:04:33.436 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:33.436 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:04:33.436 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:33.436 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:33.436 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1301106 00:04:33.436 element at address: 0x20000085f180 with size: 0.125549 MiB 00:04:33.436 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1301106 00:04:33.436 element at address: 0x2000192f5bc0 with size: 0.031799 MiB 00:04:33.436 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:33.436 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:04:33.436 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:33.436 element at address: 0x20000085af40 with size: 0.016174 MiB 00:04:33.436 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1301106 00:04:33.436 element at address: 0x20002886f540 with size: 0.002502 MiB 00:04:33.436 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:33.436 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:04:33.436 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1301106 00:04:33.436 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:33.436 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1301106 00:04:33.436 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:33.436 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1301106 00:04:33.436 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:04:33.436 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:33.436 04:57:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:33.436 04:57:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1301106 00:04:33.436 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1301106 ']' 00:04:33.437 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1301106 00:04:33.437 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:33.437 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.437 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1301106 00:04:33.698 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:04:33.698 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.698 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1301106' 00:04:33.698 killing process with pid 1301106 00:04:33.698 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1301106 00:04:33.698 04:57:47 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1301106 00:04:34.642 00:04:34.642 real 0m2.371s 00:04:34.642 user 0m2.311s 00:04:34.642 sys 0m0.536s 00:04:34.642 04:57:48 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.643 04:57:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.643 ************************************ 00:04:34.643 END TEST dpdk_mem_utility 00:04:34.643 ************************************ 00:04:34.904 04:57:48 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:34.904 04:57:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.904 04:57:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.904 04:57:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.904 ************************************ 00:04:34.904 START TEST event 00:04:34.904 ************************************ 00:04:34.904 04:57:48 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:34.904 * Looking for test storage... 00:04:34.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:34.904 04:57:48 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.904 04:57:48 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.904 04:57:48 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.904 04:57:48 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.904 04:57:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.904 04:57:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.904 04:57:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.904 04:57:48 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.904 04:57:48 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.904 04:57:48 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.904 04:57:48 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.904 04:57:48 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.904 04:57:48 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.904 04:57:48 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.904 04:57:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.904 04:57:48 event -- scripts/common.sh@344 -- # case "$op" in 00:04:34.904 04:57:48 event -- scripts/common.sh@345 -- # : 1 00:04:34.904 04:57:48 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.904 04:57:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.166 04:57:48 event -- scripts/common.sh@365 -- # decimal 1 00:04:35.166 04:57:48 event -- scripts/common.sh@353 -- # local d=1 00:04:35.166 04:57:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.166 04:57:48 event -- scripts/common.sh@355 -- # echo 1 00:04:35.166 04:57:48 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.166 04:57:48 event -- scripts/common.sh@366 -- # decimal 2 00:04:35.166 04:57:48 event -- scripts/common.sh@353 -- # local d=2 00:04:35.166 04:57:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.166 04:57:48 event -- scripts/common.sh@355 -- # echo 2 00:04:35.166 04:57:48 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.166 04:57:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.166 04:57:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.166 04:57:48 event -- scripts/common.sh@368 -- # return 0 00:04:35.166 04:57:48 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.166 04:57:48 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.166 --rc genhtml_branch_coverage=1 00:04:35.166 --rc genhtml_function_coverage=1 00:04:35.166 --rc genhtml_legend=1 00:04:35.166 --rc geninfo_all_blocks=1 00:04:35.166 --rc geninfo_unexecuted_blocks=1 00:04:35.166 00:04:35.166 ' 00:04:35.166 04:57:48 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.166 --rc genhtml_branch_coverage=1 00:04:35.166 --rc genhtml_function_coverage=1 00:04:35.166 --rc genhtml_legend=1 00:04:35.166 --rc geninfo_all_blocks=1 00:04:35.166 --rc geninfo_unexecuted_blocks=1 00:04:35.166 00:04:35.166 ' 00:04:35.166 04:57:48 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.166 --rc genhtml_branch_coverage=1 00:04:35.166 --rc genhtml_function_coverage=1 00:04:35.166 --rc genhtml_legend=1 00:04:35.166 --rc geninfo_all_blocks=1 00:04:35.166 --rc geninfo_unexecuted_blocks=1 00:04:35.166 00:04:35.166 ' 00:04:35.166 04:57:48 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.166 --rc genhtml_branch_coverage=1 00:04:35.166 --rc genhtml_function_coverage=1 00:04:35.166 --rc genhtml_legend=1 00:04:35.166 --rc geninfo_all_blocks=1 00:04:35.166 --rc geninfo_unexecuted_blocks=1 00:04:35.166 00:04:35.166 ' 00:04:35.166 04:57:48 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:35.166 04:57:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:35.166 04:57:48 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:35.166 04:57:48 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:35.166 04:57:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.166 04:57:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.166 ************************************ 00:04:35.166 START TEST event_perf 00:04:35.166 ************************************ 00:04:35.166 04:57:48 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:35.166 Running I/O for 1 seconds...[2024-12-09 04:57:49.003751] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:35.166 [2024-12-09 04:57:49.003867] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301818 ] 00:04:35.166 [2024-12-09 04:57:49.152125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:35.428 [2024-12-09 04:57:49.237200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.428 [2024-12-09 04:57:49.237318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:35.428 [2024-12-09 04:57:49.237407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.428 Running I/O for 1 seconds...[2024-12-09 04:57:49.237435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.371 00:04:36.371 lcore 0: 199869 00:04:36.371 lcore 1: 199873 00:04:36.371 lcore 2: 199871 00:04:36.371 lcore 3: 199870 00:04:36.371 done. 00:04:36.632 00:04:36.632 real 0m1.423s 00:04:36.632 user 0m4.253s 00:04:36.632 sys 0m0.166s 00:04:36.632 04:57:50 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.633 04:57:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.633 ************************************ 00:04:36.633 END TEST event_perf 00:04:36.633 ************************************ 00:04:36.633 04:57:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:36.633 04:57:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:36.633 04:57:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.633 04:57:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.633 ************************************ 00:04:36.633 START TEST event_reactor 00:04:36.633 ************************************ 00:04:36.633 04:57:50 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:36.633 [2024-12-09 04:57:50.508959] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:36.633 [2024-12-09 04:57:50.509067] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302068 ] 00:04:36.894 [2024-12-09 04:57:50.656559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.894 [2024-12-09 04:57:50.737417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.280 test_start 00:04:38.280 oneshot 00:04:38.280 tick 100 00:04:38.280 tick 100 00:04:38.280 tick 250 00:04:38.280 tick 100 00:04:38.280 tick 100 00:04:38.280 tick 100 00:04:38.280 tick 250 00:04:38.280 tick 500 00:04:38.280 tick 100 00:04:38.280 tick 100 00:04:38.280 tick 250 00:04:38.280 tick 100 00:04:38.280 tick 100 00:04:38.280 test_end 00:04:38.280 00:04:38.280 real 0m1.398s 00:04:38.280 user 0m1.245s 00:04:38.280 sys 0m0.146s 00:04:38.280 04:57:51 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.280 04:57:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:38.280 ************************************ 00:04:38.280 END TEST event_reactor 00:04:38.280 ************************************ 00:04:38.280 04:57:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:38.280 04:57:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:38.280 04:57:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.280 04:57:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.280 ************************************ 00:04:38.280 START TEST event_reactor_perf 00:04:38.280 ************************************ 00:04:38.280 04:57:51 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:38.280 [2024-12-09 04:57:51.980153] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:38.280 [2024-12-09 04:57:51.980260] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302313 ] 00:04:38.280 [2024-12-09 04:57:52.122661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.280 [2024-12-09 04:57:52.203242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.688 test_start 00:04:39.688 test_end 00:04:39.688 Performance: 426371 events per second 00:04:39.688 00:04:39.688 real 0m1.385s 00:04:39.688 user 0m1.247s 00:04:39.688 sys 0m0.132s 00:04:39.688 04:57:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.688 04:57:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.688 ************************************ 00:04:39.688 END TEST event_reactor_perf 00:04:39.688 ************************************ 00:04:39.688 04:57:53 event -- event/event.sh@49 -- # uname -s 00:04:39.688 04:57:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:39.688 04:57:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:39.688 04:57:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.688 04:57:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.688 04:57:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.688 ************************************ 00:04:39.688 START TEST event_scheduler 00:04:39.688 ************************************ 00:04:39.688 04:57:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:39.688 * Looking for test storage... 
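The three event subtests above (event_perf, event_reactor, event_reactor_perf) all follow the same pattern: run_test wraps a small prebuilt binary that takes an optional core mask and a runtime in seconds. Re-running them by hand would look roughly like this (invocations copied from the log; relative paths assume the SPDK repo root):

  test/event/event_perf/event_perf -m 0xF -t 1     # per-lcore event counts
  test/event/reactor/reactor -t 1                  # oneshot/tick poller trace
  test/event/reactor_perf/reactor_perf -t 1        # prints "Performance: N events per second"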
00:04:39.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:39.688 04:57:53 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:39.688 04:57:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:39.688 04:57:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:39.688 04:57:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.688 04:57:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:39.688 04:57:53 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.688 04:57:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:39.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.688 --rc genhtml_branch_coverage=1 00:04:39.688 --rc genhtml_function_coverage=1 00:04:39.688 --rc genhtml_legend=1 00:04:39.688 --rc geninfo_all_blocks=1 00:04:39.688 --rc geninfo_unexecuted_blocks=1 00:04:39.688 00:04:39.688 ' 00:04:39.688 04:57:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:39.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.689 --rc genhtml_branch_coverage=1 00:04:39.689 --rc genhtml_function_coverage=1 00:04:39.689 --rc genhtml_legend=1 00:04:39.689 --rc geninfo_all_blocks=1 00:04:39.689 --rc geninfo_unexecuted_blocks=1 00:04:39.689 00:04:39.689 ' 00:04:39.689 04:57:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:39.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.689 --rc genhtml_branch_coverage=1 00:04:39.689 --rc genhtml_function_coverage=1 00:04:39.689 --rc genhtml_legend=1 00:04:39.689 --rc geninfo_all_blocks=1 00:04:39.689 --rc geninfo_unexecuted_blocks=1 00:04:39.689 00:04:39.689 ' 00:04:39.689 04:57:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:39.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.689 --rc genhtml_branch_coverage=1 00:04:39.689 --rc genhtml_function_coverage=1 00:04:39.689 --rc genhtml_legend=1 00:04:39.689 --rc geninfo_all_blocks=1 00:04:39.689 --rc geninfo_unexecuted_blocks=1 00:04:39.689 00:04:39.689 ' 00:04:39.689 04:57:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:39.689 04:57:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1302666 00:04:39.689 04:57:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.689 04:57:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:39.689 04:57:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1302666 00:04:39.689 04:57:53 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1302666 ']' 00:04:39.689 04:57:53 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.689 04:57:53 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.689 04:57:53 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.689 04:57:53 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.689 04:57:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.950 [2024-12-09 04:57:53.689551] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:39.950 [2024-12-09 04:57:53.689665] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302666 ] 00:04:39.950 [2024-12-09 04:57:53.840966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:40.212 [2024-12-09 04:57:53.953297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.212 [2024-12-09 04:57:53.953432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.212 [2024-12-09 04:57:53.953531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.212 [2024-12-09 04:57:53.953558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.473 04:57:54 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.473 04:57:54 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:40.473 04:57:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:40.473 04:57:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.473 04:57:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.473 [2024-12-09 04:57:54.463837] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:40.473 [2024-12-09 04:57:54.463865] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:40.473 [2024-12-09 04:57:54.463884] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:40.473 [2024-12-09 04:57:54.463895] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:40.473 [2024-12-09 04:57:54.463905] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:40.734 04:57:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.734 04:57:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:40.734 04:57:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.734 04:57:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.734 [2024-12-09 04:57:54.727862] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:40.734 04:57:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.734 04:57:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:40.997 04:57:54 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.997 04:57:54 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 ************************************ 00:04:40.997 START TEST scheduler_create_thread 00:04:40.997 ************************************ 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 2 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 3 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 4 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 5 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 6 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 7 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 8 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 9 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.997 10 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.997 04:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.382 04:57:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.382 04:57:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:42.382 04:57:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:42.382 04:57:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.382 04:57:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.326 04:57:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.326 04:57:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:43.326 04:57:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.326 04:57:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.269 04:57:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.269 04:57:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:44.269 04:57:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:44.269 04:57:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.269 04:57:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.843 04:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.843 00:04:44.843 real 0m3.894s 00:04:44.843 user 0m0.024s 00:04:44.843 sys 0m0.007s 00:04:44.843 04:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.843 04:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.843 ************************************ 00:04:44.843 END TEST scheduler_create_thread 00:04:44.843 ************************************ 00:04:44.843 04:57:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:44.843 04:57:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1302666 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1302666 ']' 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1302666 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1302666 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1302666' 00:04:44.843 killing process with pid 1302666 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1302666 00:04:44.843 04:57:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1302666 00:04:45.104 [2024-12-09 04:57:59.045171] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
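The scheduler_create_thread subtest above drives everything through the test's RPC plugin (scheduler_plugin) rather than the core RPC set: threads are created with a name, cpumask, and active percentage, retuned with scheduler_thread_set_active, and removed with scheduler_thread_delete. A by-hand sketch using the same calls (thread ids 11 and 12 are the ones returned during this run; the plugin must be importable, e.g. from the test/event/scheduler directory):

  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12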
00:04:45.676 00:04:45.676 real 0m6.232s 00:04:45.676 user 0m12.642s 00:04:45.676 sys 0m0.569s 00:04:45.676 04:57:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.676 04:57:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.676 ************************************ 00:04:45.676 END TEST event_scheduler 00:04:45.676 ************************************ 00:04:45.937 04:57:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:45.937 04:57:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:45.937 04:57:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.937 04:57:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.937 04:57:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.937 ************************************ 00:04:45.937 START TEST app_repeat 00:04:45.937 ************************************ 00:04:45.938 04:57:59 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1304009 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1304009' 00:04:45.938 Process app_repeat pid: 1304009 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:45.938 spdk_app_start Round 0 00:04:45.938 04:57:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1304009 /var/tmp/spdk-nbd.sock 00:04:45.938 04:57:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1304009 ']' 00:04:45.938 04:57:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.938 04:57:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.938 04:57:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.938 04:57:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.938 04:57:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.938 [2024-12-09 04:57:59.791391] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:45.938 [2024-12-09 04:57:59.791518] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304009 ] 00:04:45.938 [2024-12-09 04:57:59.927532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.199 [2024-12-09 04:58:00.003440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.199 [2024-12-09 04:58:00.003468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.772 04:58:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.772 04:58:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:46.772 04:58:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.034 Malloc0 00:04:47.034 04:58:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.034 Malloc1 00:04:47.035 04:58:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.035 04:58:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.035 04:58:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.035 04:58:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.035 04:58:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.035 04:58:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.296 04:58:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.296 04:58:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.296 04:58:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.296 04:58:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.296 04:58:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.296 04:58:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.297 04:58:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:47.297 04:58:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.297 04:58:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.297 04:58:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.297 /dev/nbd0 00:04:47.297 04:58:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.297 04:58:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.297 1+0 records in 00:04:47.297 1+0 records out 00:04:47.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280838 s, 14.6 MB/s 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:47.297 04:58:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:47.297 04:58:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.297 04:58:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.297 04:58:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.557 /dev/nbd1 00:04:47.557 04:58:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.557 04:58:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.557 04:58:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:47.557 04:58:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:47.557 04:58:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:47.557 04:58:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:47.557 04:58:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:47.557 04:58:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:47.557 04:58:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:47.557 04:58:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:47.557 04:58:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.557 1+0 records in 00:04:47.557 1+0 records out 00:04:47.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279673 s, 14.6 MB/s 00:04:47.558 04:58:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.558 04:58:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:47.558 04:58:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.558 04:58:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:47.558 04:58:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:47.558 04:58:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.558 04:58:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.558 
04:58:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.558 04:58:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.558 04:58:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:47.818 { 00:04:47.818 "nbd_device": "/dev/nbd0", 00:04:47.818 "bdev_name": "Malloc0" 00:04:47.818 }, 00:04:47.818 { 00:04:47.818 "nbd_device": "/dev/nbd1", 00:04:47.818 "bdev_name": "Malloc1" 00:04:47.818 } 00:04:47.818 ]' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.818 { 00:04:47.818 "nbd_device": "/dev/nbd0", 00:04:47.818 "bdev_name": "Malloc0" 00:04:47.818 }, 00:04:47.818 { 00:04:47.818 "nbd_device": "/dev/nbd1", 00:04:47.818 "bdev_name": "Malloc1" 00:04:47.818 } 00:04:47.818 ]' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.818 /dev/nbd1' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.818 /dev/nbd1' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.818 256+0 records in 00:04:47.818 256+0 records out 00:04:47.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118792 s, 88.3 MB/s 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.818 256+0 records in 00:04:47.818 256+0 records out 00:04:47.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013515 s, 77.6 MB/s 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.818 256+0 records in 00:04:47.818 256+0 records out 00:04:47.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154522 s, 67.9 MB/s 00:04:47.818 04:58:01 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.818 04:58:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.079 04:58:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.339 04:58:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.598 04:58:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.598 04:58:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.858 04:58:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.433 [2024-12-09 04:58:03.275468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.433 [2024-12-09 04:58:03.343337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.433 [2024-12-09 04:58:03.343342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.695 [2024-12-09 04:58:03.446133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.695 [2024-12-09 04:58:03.446176] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.242 04:58:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.242 04:58:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:52.242 spdk_app_start Round 1 00:04:52.242 04:58:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1304009 /var/tmp/spdk-nbd.sock 00:04:52.242 04:58:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1304009 ']' 00:04:52.242 04:58:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.242 04:58:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.242 04:58:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
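The autotest_common.sh@872-893 trace repeated above for nbd0 and nbd1 is the waitfornbd helper at work: one loop waits for the kernel to publish the device, a second proves it is readable. A minimal sketch of what those xtrace lines imply, with two hedges: the retry delay is assumed (the log only shows the happy path, where both loops succeed on the first pass), and the scratch-file path is shortened from the workspace path in the log.

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i size
        # Phase 1: poll until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed; the traced runs break on the first probe
        done
        # Phase 2: read one 4 KiB block with O_DIRECT and check that a
        # non-empty file came back, i.e. the nbd transport itself answered.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

The iflag=direct matters: it bypasses the page cache, so a successful read means the round trip through the nbd device (and the Malloc bdev behind it) really completed.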
00:04:52.242 04:58:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.242 04:58:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.242 04:58:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.242 04:58:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:52.242 04:58:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.242 Malloc0 00:04:52.242 04:58:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.503 Malloc1 00:04:52.503 04:58:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.503 04:58:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.503 04:58:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.504 /dev/nbd0 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.504 04:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:52.504 1+0 records in 00:04:52.504 1+0 records out 00:04:52.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273679 s, 15.0 MB/s 00:04:52.504 04:58:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.765 /dev/nbd1 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.765 1+0 records in 00:04:52.765 1+0 records out 00:04:52.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021374 s, 19.2 MB/s 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.765 04:58:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.765 04:58:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:53.027 { 00:04:53.027 "nbd_device": "/dev/nbd0", 00:04:53.027 "bdev_name": "Malloc0" 00:04:53.027 }, 00:04:53.027 { 00:04:53.027 "nbd_device": "/dev/nbd1", 00:04:53.027 "bdev_name": "Malloc1" 00:04:53.027 } 00:04:53.027 ]' 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.027 { 00:04:53.027 "nbd_device": "/dev/nbd0", 00:04:53.027 "bdev_name": "Malloc0" 00:04:53.027 }, 00:04:53.027 { 00:04:53.027 "nbd_device": "/dev/nbd1", 00:04:53.027 "bdev_name": "Malloc1" 00:04:53.027 } 00:04:53.027 ]' 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.027 /dev/nbd1' 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.027 /dev/nbd1' 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.027 04:58:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.028 256+0 records in 00:04:53.028 256+0 records out 00:04:53.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127448 s, 82.3 MB/s 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.028 256+0 records in 00:04:53.028 256+0 records out 00:04:53.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137127 s, 76.5 MB/s 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.028 04:58:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.028 256+0 records in 00:04:53.028 256+0 records out 00:04:53.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152712 s, 68.7 MB/s 00:04:53.028 04:58:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.028 04:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.028 04:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.028 04:58:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.028 04:58:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.028 04:58:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.028 04:58:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.028 04:58:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.028 04:58:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.289 04:58:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.550 04:58:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.812 04:58:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.813 04:58:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.813 04:58:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.073 04:58:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.642 [2024-12-09 04:58:08.476829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.643 [2024-12-09 04:58:08.545977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.643 [2024-12-09 04:58:08.546050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.904 [2024-12-09 04:58:08.649041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.904 [2024-12-09 04:58:08.649085] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.452 04:58:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.452 04:58:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:57.452 spdk_app_start Round 2 00:04:57.452 04:58:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1304009 /var/tmp/spdk-nbd.sock 00:04:57.452 04:58:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1304009 ']' 00:04:57.452 04:58:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.452 04:58:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.452 04:58:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
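The nbd_common.sh@61-66 trace around each teardown is nbd_get_count, which is how the harness proves both devices really detached: the count it prints drops from 2 after nbd_start_disks to 0 after nbd_stop_disks, letting the '[' 0 -ne 0 ']' guard pass. A condensed sketch of that pipeline; the rpc.py path is abbreviated, and '|| true' stands in for the bare 'true' fallback visible in the trace when grep -c matches nothing:

    nbd_get_count() {
        local rpc_server=$1 json names count
        # Dump the attached nbd devices as JSON over the RPC Unix socket.
        json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        # Extract the /dev/nbdX paths, then count them. grep -c exits
        # non-zero on zero matches, so the fallback keeps set -e alive.
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }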
00:04:57.452 04:58:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.452 04:58:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.452 04:58:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.452 04:58:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:57.452 04:58:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.452 Malloc0 00:04:57.452 04:58:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.713 Malloc1 00:04:57.713 04:58:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.713 04:58:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.713 /dev/nbd0 00:04:57.974 04:58:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.974 04:58:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:57.974 1+0 records in 00:04:57.974 1+0 records out 00:04:57.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272538 s, 15.0 MB/s 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.974 04:58:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.974 04:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.975 04:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.975 04:58:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.975 /dev/nbd1 00:04:57.975 04:58:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.975 04:58:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.975 1+0 records in 00:04:57.975 1+0 records out 00:04:57.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308206 s, 13.3 MB/s 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.975 04:58:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.975 04:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.975 04:58:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.975 04:58:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.975 04:58:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.975 04:58:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:58.235 { 00:04:58.235 "nbd_device": "/dev/nbd0", 00:04:58.235 "bdev_name": "Malloc0" 00:04:58.235 }, 00:04:58.235 { 00:04:58.235 "nbd_device": "/dev/nbd1", 00:04:58.235 "bdev_name": "Malloc1" 00:04:58.235 } 00:04:58.235 ]' 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.235 { 00:04:58.235 "nbd_device": "/dev/nbd0", 00:04:58.235 "bdev_name": "Malloc0" 00:04:58.235 }, 00:04:58.235 { 00:04:58.235 "nbd_device": "/dev/nbd1", 00:04:58.235 "bdev_name": "Malloc1" 00:04:58.235 } 00:04:58.235 ]' 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.235 /dev/nbd1' 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.235 /dev/nbd1' 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.235 256+0 records in 00:04:58.235 256+0 records out 00:04:58.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120577 s, 87.0 MB/s 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.235 04:58:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.495 256+0 records in 00:04:58.495 256+0 records out 00:04:58.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132665 s, 79.0 MB/s 00:04:58.495 04:58:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.495 04:58:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.495 256+0 records in 00:04:58.495 256+0 records out 00:04:58.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155465 s, 67.4 MB/s 00:04:58.495 04:58:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.495 04:58:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.496 04:58:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.757 04:58:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.017 04:58:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.017 04:58:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.277 04:58:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.847 [2024-12-09 04:58:13.755211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.847 [2024-12-09 04:58:13.823383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.847 [2024-12-09 04:58:13.823388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.107 [2024-12-09 04:58:13.926518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.107 [2024-12-09 04:58:13.926562] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.649 04:58:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1304009 /var/tmp/spdk-nbd.sock 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1304009 ']' 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
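The 256-record dd runs and cmp calls that repeat every round are the two halves of nbd_dd_data_verify (nbd_common.sh@70-85): write mode seeds a 1 MiB scratch file from /dev/urandom and fans it out to every device with O_DIRECT, verify mode byte-compares the first 1 MiB of each device back against that file and deletes it. A sketch under those assumptions, with the scratch path shortened:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=/tmp/nbdrandtest   # the job keeps this under spdk/test/event
        if [ "$operation" = write ]; then
            # 256 x 4 KiB = 1 MiB of random data, copied to every device.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # -b prints differing bytes; -n 1M bounds the compare to the
            # region that was written. Any mismatch fails the round.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }

Writing through the block device and reading back with cmp against the source file exercises the Malloc bdev's data path end to end rather than the page cache.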
00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.649 04:58:16 event.app_repeat -- event/event.sh@39 -- # killprocess 1304009 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1304009 ']' 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1304009 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1304009 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1304009' 00:05:02.649 killing process with pid 1304009 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1304009 00:05:02.649 04:58:16 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1304009 00:05:03.221 spdk_app_start is called in Round 0. 00:05:03.222 Shutdown signal received, stop current app iteration 00:05:03.222 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:03.222 spdk_app_start is called in Round 1. 00:05:03.222 Shutdown signal received, stop current app iteration 00:05:03.222 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:03.222 spdk_app_start is called in Round 2. 00:05:03.222 Shutdown signal received, stop current app iteration 00:05:03.222 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:03.222 spdk_app_start is called in Round 3. 
00:05:03.222 Shutdown signal received, stop current app iteration 00:05:03.222 04:58:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:03.222 04:58:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:03.222 00:05:03.222 real 0m17.225s 00:05:03.222 user 0m36.890s 00:05:03.222 sys 0m2.406s 00:05:03.222 04:58:16 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.222 04:58:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.222 ************************************ 00:05:03.222 END TEST app_repeat 00:05:03.222 ************************************ 00:05:03.222 04:58:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:03.222 04:58:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:03.222 04:58:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.222 04:58:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.222 04:58:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.222 ************************************ 00:05:03.222 START TEST cpu_locks 00:05:03.222 ************************************ 00:05:03.222 04:58:17 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:03.222 * Looking for test storage... 00:05:03.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:03.222 04:58:17 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.222 04:58:17 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.222 04:58:17 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.222 04:58:17 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.222 04:58:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:03.483 04:58:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:03.483 04:58:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.483 04:58:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:03.483 04:58:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.483 04:58:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.483 04:58:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.483 04:58:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:03.483 04:58:17 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.483 04:58:17 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.483 --rc genhtml_branch_coverage=1 00:05:03.483 --rc genhtml_function_coverage=1 00:05:03.483 --rc genhtml_legend=1 00:05:03.483 --rc geninfo_all_blocks=1 00:05:03.483 --rc geninfo_unexecuted_blocks=1 00:05:03.483 00:05:03.483 ' 00:05:03.483 04:58:17 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.483 --rc genhtml_branch_coverage=1 00:05:03.483 --rc genhtml_function_coverage=1 00:05:03.483 --rc genhtml_legend=1 00:05:03.483 --rc geninfo_all_blocks=1 00:05:03.483 --rc geninfo_unexecuted_blocks=1 00:05:03.483 00:05:03.483 ' 00:05:03.483 04:58:17 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.483 --rc genhtml_branch_coverage=1 00:05:03.483 --rc genhtml_function_coverage=1 00:05:03.483 --rc genhtml_legend=1 00:05:03.483 --rc geninfo_all_blocks=1 00:05:03.483 --rc geninfo_unexecuted_blocks=1 00:05:03.483 00:05:03.483 ' 00:05:03.484 04:58:17 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.484 --rc genhtml_branch_coverage=1 00:05:03.484 --rc genhtml_function_coverage=1 00:05:03.484 --rc genhtml_legend=1 00:05:03.484 --rc geninfo_all_blocks=1 00:05:03.484 --rc geninfo_unexecuted_blocks=1 00:05:03.484 00:05:03.484 ' 00:05:03.484 04:58:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:03.484 04:58:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:03.484 04:58:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:03.484 04:58:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:03.484 04:58:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.484 04:58:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.484 04:58:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.484 ************************************ 
00:05:03.484 START TEST default_locks 00:05:03.484 ************************************ 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1307613 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1307613 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1307613 ']' 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.484 04:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.484 [2024-12-09 04:58:17.377032] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:03.484 [2024-12-09 04:58:17.377160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307613 ] 00:05:03.745 [2024-12-09 04:58:17.523802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.745 [2024-12-09 04:58:17.610641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.336 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.336 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:04.336 04:58:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1307613 00:05:04.336 04:58:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1307613 00:05:04.336 04:58:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.597 lslocks: write error 00:05:04.597 04:58:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1307613 00:05:04.597 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1307613 ']' 00:05:04.597 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1307613 00:05:04.597 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:04.597 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.597 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1307613 00:05:04.857 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.857 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.857 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1307613' 00:05:04.857 killing process with pid 1307613 00:05:04.857 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1307613 00:05:04.857 04:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1307613 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1307613 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1307613 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1307613 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1307613 ']' 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
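The locks_exist probe traced above is the whole verification step: spdk_tgt takes one lock file per claimed core under /var/tmp, and lslocks lists it by pid. As traced in event/cpu_locks.sh (grep -q bailing out on the first match is presumably what produces the harmless "lslocks: write error" noise in this log):

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # one /var/tmp/spdk_cpu_lock_* per claimed core
}
# $spdk_tgt_pid: pid of a running target, as set by the harness
locks_exist "$spdk_tgt_pid" && echo "core lock(s) held by pid $spdk_tgt_pid"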
00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1307613) - No such process 00:05:05.804 ERROR: process (pid: 1307613) is no longer running 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.804 04:58:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:05.805 04:58:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:05.805 04:58:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:05.805 04:58:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:05.805 00:05:05.805 real 0m2.524s 00:05:05.805 user 0m2.486s 00:05:05.805 sys 0m0.693s 00:05:05.805 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.805 04:58:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.805 ************************************ 00:05:05.805 END TEST default_locks 00:05:05.805 ************************************ 00:05:06.066 04:58:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:06.066 04:58:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.066 04:58:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.066 04:58:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.066 ************************************ 00:05:06.066 START TEST default_locks_via_rpc 00:05:06.066 ************************************ 00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1308305 00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1308305 00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1308305 ']' 00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
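default_locks_via_rpc, which starts here, exercises the same lock files without restarting the target: framework_disable_cpumask_locks releases them at runtime and framework_enable_cpumask_locks re-claims them, which is what the rpc_cmd calls below trace. A client-side sketch of that round trip, assuming the stock scripts/rpc.py against the default /var/tmp/spdk.sock:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc framework_disable_cpumask_locks   # releases /var/tmp/spdk_cpu_lock_*
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo 'locks released'
$rpc framework_enable_cpumask_locks    # re-claims every core in the -m mask
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo 'locks re-acquired'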
00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.066 04:58:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.066 [2024-12-09 04:58:19.959405] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:06.066 [2024-12-09 04:58:19.959541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308305 ] 00:05:06.325 [2024-12-09 04:58:20.108798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.325 [2024-12-09 04:58:20.193243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1308305 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.895 04:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1308305 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1308305 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1308305 ']' 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1308305 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1308305 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.154 
04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1308305' 00:05:07.154 killing process with pid 1308305 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1308305 00:05:07.154 04:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1308305 00:05:08.535 00:05:08.535 real 0m2.366s 00:05:08.535 user 0m2.349s 00:05:08.535 sys 0m0.595s 00:05:08.535 04:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.535 04:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.535 ************************************ 00:05:08.535 END TEST default_locks_via_rpc 00:05:08.535 ************************************ 00:05:08.535 04:58:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:08.535 04:58:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.535 04:58:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.535 04:58:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.535 ************************************ 00:05:08.535 START TEST non_locking_app_on_locked_coremask 00:05:08.535 ************************************ 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1308686 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1308686 /var/tmp/spdk.sock 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1308686 ']' 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.535 04:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.535 [2024-12-09 04:58:22.396644] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:08.535 [2024-12-09 04:58:22.396760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308686 ] 00:05:08.797 [2024-12-09 04:58:22.534763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.797 [2024-12-09 04:58:22.617259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1308967 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1308967 /var/tmp/spdk2.sock 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1308967 ']' 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.369 04:58:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.369 [2024-12-09 04:58:23.222812] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:09.369 [2024-12-09 04:58:23.222934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308967 ] 00:05:09.632 [2024-12-09 04:58:23.375226] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
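That "CPU core locks deactivated" notice is the crux of non_locking_app_on_locked_coremask: the second target reuses mask 0x1 while the first still holds the core-0 lock, and it only comes up because --disable-cpumask-locks makes it skip the claim. A sketch of the pair (binary path as used in this workspace; the harness's waitforlisten omitted):

tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$tgt" -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
"$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same mask, no claim: starts fine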
00:05:09.632 [2024-12-09 04:58:23.375268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.632 [2024-12-09 04:58:23.526415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.573 04:58:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.573 04:58:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.573 04:58:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1308686 00:05:10.573 04:58:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1308686 00:05:10.573 04:58:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.148 lslocks: write error 00:05:11.148 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1308686 00:05:11.148 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1308686 ']' 00:05:11.148 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1308686 00:05:11.148 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.148 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.148 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1308686 00:05:11.409 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.409 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.409 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1308686' 00:05:11.409 killing process with pid 1308686 00:05:11.409 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1308686 00:05:11.409 04:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1308686 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1308967 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1308967 ']' 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1308967 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1308967 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1308967' 00:05:13.957 
killing process with pid 1308967 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1308967 00:05:13.957 04:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1308967 00:05:14.899 00:05:14.899 real 0m6.437s 00:05:14.899 user 0m6.612s 00:05:14.899 sys 0m1.154s 00:05:14.899 04:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.899 04:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.899 ************************************ 00:05:14.899 END TEST non_locking_app_on_locked_coremask 00:05:14.899 ************************************ 00:05:14.899 04:58:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:14.899 04:58:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.899 04:58:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.899 04:58:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.899 ************************************ 00:05:14.899 START TEST locking_app_on_unlocked_coremask 00:05:14.899 ************************************ 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1310068 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1310068 /var/tmp/spdk.sock 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1310068 ']' 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.899 04:58:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.161 [2024-12-09 04:58:28.931026] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:15.161 [2024-12-09 04:58:28.931150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310068 ] 00:05:15.161 [2024-12-09 04:58:29.077794] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.161 [2024-12-09 04:58:29.077846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.421 [2024-12-09 04:58:29.159252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1310202 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1310202 /var/tmp/spdk2.sock 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1310202 ']' 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.989 04:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.989 [2024-12-09 04:58:29.772943] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:15.989 [2024-12-09 04:58:29.773053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310202 ] 00:05:15.989 [2024-12-09 04:58:29.923776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.249 [2024-12-09 04:58:30.090414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.190 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.190 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:17.190 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1310202 00:05:17.190 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1310202 00:05:17.190 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.763 lslocks: write error 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1310068 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1310068 ']' 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1310068 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1310068 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1310068' 00:05:17.763 killing process with pid 1310068 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1310068 00:05:17.763 04:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1310068 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1310202 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1310202 ']' 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1310202 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1310202 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.325 04:58:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1310202' 00:05:20.325 killing process with pid 1310202 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1310202 00:05:20.325 04:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1310202 00:05:21.275 00:05:21.276 real 0m6.312s 00:05:21.276 user 0m6.461s 00:05:21.276 sys 0m1.122s 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.276 ************************************ 00:05:21.276 END TEST locking_app_on_unlocked_coremask 00:05:21.276 ************************************ 00:05:21.276 04:58:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:21.276 04:58:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.276 04:58:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.276 04:58:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.276 ************************************ 00:05:21.276 START TEST locking_app_on_locked_coremask 00:05:21.276 ************************************ 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1311446 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1311446 /var/tmp/spdk.sock 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1311446 ']' 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.276 04:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.538 [2024-12-09 04:58:35.306782] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:21.538 [2024-12-09 04:58:35.306960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311446 ] 00:05:21.538 [2024-12-09 04:58:35.451763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.538 [2024-12-09 04:58:35.532610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1311459 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1311459 /var/tmp/spdk2.sock 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1311459 /var/tmp/spdk2.sock 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1311459 /var/tmp/spdk2.sock 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1311459 ']' 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.116 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.377 [2024-12-09 04:58:36.146247] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:22.377 [2024-12-09 04:58:36.146358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311459 ] 00:05:22.377 [2024-12-09 04:58:36.294545] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1311446 has claimed it. 00:05:22.377 [2024-12-09 04:58:36.294590] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1311459) - No such process 00:05:22.949 ERROR: process (pid: 1311459) is no longer running 00:05:22.949 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.949 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:22.949 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:22.949 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.949 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.949 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.949 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1311446 00:05:22.949 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1311446 00:05:22.949 04:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.208 lslocks: write error 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1311446 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1311446 ']' 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1311446 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1311446 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1311446' 00:05:23.208 killing process with pid 1311446 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1311446 00:05:23.208 04:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1311446 00:05:24.590 00:05:24.590 real 0m3.137s 00:05:24.590 user 0m3.316s 00:05:24.590 sys 0m0.775s 00:05:24.590 04:58:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
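The error pair above is the expected outcome of locking_app_on_locked_coremask: with locks left enabled, a second spdk_tgt on an already-claimed mask aborts during startup, and the harness's NOT wrapper asserts the non-zero exit. A standalone sketch of the same conflict (a sleep standing in for the harness's waitforlisten):

tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$tgt" -m 0x1 & first=$!
sleep 2                                        # crude stand-in for waitforlisten
if ! "$tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second instance aborted: core 0 already claimed by pid $first"
fi
kill "$first"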
00:05:24.590 04:58:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.590 ************************************ 00:05:24.590 END TEST locking_app_on_locked_coremask 00:05:24.590 ************************************ 00:05:24.590 04:58:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:24.590 04:58:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.591 04:58:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.591 04:58:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.591 ************************************ 00:05:24.591 START TEST locking_overlapped_coremask 00:05:24.591 ************************************ 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1312036 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1312036 /var/tmp/spdk.sock 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1312036 ']' 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.591 04:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.591 [2024-12-09 04:58:38.517182] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:24.591 [2024-12-09 04:58:38.517313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312036 ] 00:05:24.851 [2024-12-09 04:58:38.663348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.851 [2024-12-09 04:58:38.746470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.851 [2024-12-09 04:58:38.746567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.851 [2024-12-09 04:58:38.746592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1312169 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1312169 /var/tmp/spdk2.sock 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1312169 /var/tmp/spdk2.sock 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1312169 /var/tmp/spdk2.sock 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1312169 ']' 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.423 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.424 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.424 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.424 [2024-12-09 04:58:39.371754] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:25.424 [2024-12-09 04:58:39.371871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312169 ] 00:05:25.685 [2024-12-09 04:58:39.557998] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1312036 has claimed it. 00:05:25.685 [2024-12-09 04:58:39.558054] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1312169) - No such process 00:05:26.257 ERROR: process (pid: 1312169) is no longer running 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1312036 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1312036 ']' 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1312036 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.257 04:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1312036 00:05:26.257 04:58:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.257 04:58:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.257 04:58:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1312036' 00:05:26.257 killing process with pid 1312036 00:05:26.257 04:58:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1312036 00:05:26.257 04:58:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1312036 00:05:27.644 00:05:27.644 real 0m2.786s 00:05:27.644 user 0m7.485s 00:05:27.644 sys 0m0.607s 00:05:27.644 04:58:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.644 04:58:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.644 ************************************ 00:05:27.644 END TEST locking_overlapped_coremask 00:05:27.644 ************************************ 00:05:27.644 04:58:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:27.644 04:58:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.644 04:58:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.644 04:58:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.644 ************************************ 00:05:27.644 START TEST locking_overlapped_coremask_via_rpc 00:05:27.644 ************************************ 00:05:27.644 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:27.645 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1312541 00:05:27.645 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1312541 /var/tmp/spdk.sock 00:05:27.645 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:27.645 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1312541 ']' 00:05:27.645 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.645 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.645 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.645 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.645 04:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.645 [2024-12-09 04:58:41.373959] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:27.645 [2024-12-09 04:58:41.374069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312541 ] 00:05:27.645 [2024-12-09 04:58:41.513028] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
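check_remaining_locks, traced just above at the end of locking_overlapped_coremask, pins down the post-conflict state: after the 0x7 target wins the claim, exactly lock files 000 through 002 may exist and nothing else. Near-verbatim from the xtrace:

check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 of mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}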
00:05:27.645 [2024-12-09 04:58:41.513070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.645 [2024-12-09 04:58:41.597587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.645 [2024-12-09 04:58:41.597679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.645 [2024-12-09 04:58:41.597705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1312875 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1312875 /var/tmp/spdk2.sock 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1312875 ']' 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.217 04:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.478 [2024-12-09 04:58:42.229531] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:28.478 [2024-12-09 04:58:42.229643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312875 ] 00:05:28.478 [2024-12-09 04:58:42.412589] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:28.478 [2024-12-09 04:58:42.412639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.739 [2024-12-09 04:58:42.617214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.739 [2024-12-09 04:58:42.617315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.739 [2024-12-09 04:58:42.617341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.127 [2024-12-09 04:58:43.733924] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1312541 has claimed it. 
00:05:30.127 request: 00:05:30.127 { 00:05:30.127 "method": "framework_enable_cpumask_locks", 00:05:30.127 "req_id": 1 00:05:30.127 } 00:05:30.127 Got JSON-RPC error response 00:05:30.127 response: 00:05:30.127 { 00:05:30.127 "code": -32603, 00:05:30.127 "message": "Failed to claim CPU core: 2" 00:05:30.127 } 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1312541 /var/tmp/spdk.sock 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1312541 ']' 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1312875 /var/tmp/spdk2.sock 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1312875 ']' 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
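The request/response pair above is the negative half of the test: while the first target still holds the lock on core 2, framework_enable_cpumask_locks on the second instance must fail with -32603. Outside the test harness the same call would look like this, assuming the rpc.py path used elsewhere in this run:

    # Ask the second instance to claim its cores now; while the masks overlap
    # this returns the -32603 'Failed to claim CPU core: 2' error shown above.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks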
00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.127 04:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.127 04:58:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.127 04:58:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.127 04:58:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:30.127 04:58:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.127 04:58:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.127 04:58:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.127 00:05:30.127 real 0m2.826s 00:05:30.127 user 0m0.869s 00:05:30.127 sys 0m0.161s 00:05:30.127 04:58:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.127 04:58:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.127 ************************************ 00:05:30.127 END TEST locking_overlapped_coremask_via_rpc 00:05:30.127 ************************************ 00:05:30.389 04:58:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:30.389 04:58:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1312541 ]] 00:05:30.389 04:58:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1312541 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1312541 ']' 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1312541 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1312541 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1312541' 00:05:30.389 killing process with pid 1312541 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1312541 00:05:30.389 04:58:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1312541 00:05:31.500 04:58:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1312875 ]] 00:05:31.500 04:58:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1312875 00:05:31.500 04:58:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1312875 ']' 00:05:31.500 04:58:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1312875 00:05:31.500 04:58:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:31.500 04:58:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:31.500 04:58:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1312875 00:05:31.763 04:58:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:31.763 04:58:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:31.763 04:58:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1312875' 00:05:31.763 killing process with pid 1312875 00:05:31.763 04:58:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1312875 00:05:31.763 04:58:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1312875 00:05:32.739 04:58:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.739 04:58:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:32.739 04:58:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1312541 ]] 00:05:32.739 04:58:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1312541 00:05:32.739 04:58:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1312541 ']' 00:05:32.739 04:58:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1312541 00:05:32.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1312541) - No such process 00:05:32.739 04:58:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1312541 is not found' 00:05:32.739 Process with pid 1312541 is not found 00:05:32.739 04:58:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1312875 ]] 00:05:32.739 04:58:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1312875 00:05:32.739 04:58:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1312875 ']' 00:05:32.739 04:58:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1312875 00:05:32.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1312875) - No such process 00:05:32.739 04:58:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1312875 is not found' 00:05:32.739 Process with pid 1312875 is not found 00:05:32.739 04:58:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.739 00:05:32.739 real 0m29.655s 00:05:32.739 user 0m50.074s 00:05:32.739 sys 0m6.321s 00:05:32.739 04:58:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.739 04:58:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.739 ************************************ 00:05:32.739 END TEST cpu_locks 00:05:32.739 ************************************ 00:05:32.739 00:05:32.739 real 0m58.009s 00:05:32.739 user 1m46.639s 00:05:32.739 sys 0m10.177s 00:05:32.739 04:58:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.739 04:58:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.739 ************************************ 00:05:32.739 END TEST event 00:05:32.739 ************************************ 00:05:33.022 04:58:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:33.022 04:58:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.022 04:58:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.022 04:58:46 -- common/autotest_common.sh@10 -- # set +x 00:05:33.022 ************************************ 00:05:33.022 START TEST thread 00:05:33.022 ************************************ 00:05:33.022 04:58:46 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:33.022 * Looking for test storage... 00:05:33.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.022 04:58:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.022 04:58:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.022 04:58:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.022 04:58:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.022 04:58:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.022 04:58:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.022 04:58:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.022 04:58:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.022 04:58:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.022 04:58:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.022 04:58:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.022 04:58:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:33.022 04:58:46 thread -- scripts/common.sh@345 -- # : 1 00:05:33.022 04:58:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.022 04:58:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.022 04:58:46 thread -- scripts/common.sh@365 -- # decimal 1 00:05:33.022 04:58:46 thread -- scripts/common.sh@353 -- # local d=1 00:05:33.022 04:58:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.022 04:58:46 thread -- scripts/common.sh@355 -- # echo 1 00:05:33.022 04:58:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.022 04:58:46 thread -- scripts/common.sh@366 -- # decimal 2 00:05:33.022 04:58:46 thread -- scripts/common.sh@353 -- # local d=2 00:05:33.022 04:58:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.022 04:58:46 thread -- scripts/common.sh@355 -- # echo 2 00:05:33.022 04:58:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.022 04:58:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.022 04:58:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.022 04:58:46 thread -- scripts/common.sh@368 -- # return 0 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.022 --rc genhtml_branch_coverage=1 00:05:33.022 --rc genhtml_function_coverage=1 00:05:33.022 --rc genhtml_legend=1 00:05:33.022 --rc geninfo_all_blocks=1 00:05:33.022 --rc geninfo_unexecuted_blocks=1 00:05:33.022 00:05:33.022 ' 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.022 --rc genhtml_branch_coverage=1 00:05:33.022 --rc genhtml_function_coverage=1 00:05:33.022 --rc genhtml_legend=1 00:05:33.022 --rc geninfo_all_blocks=1 00:05:33.022 --rc geninfo_unexecuted_blocks=1 00:05:33.022 
00:05:33.022 ' 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.022 --rc genhtml_branch_coverage=1 00:05:33.022 --rc genhtml_function_coverage=1 00:05:33.022 --rc genhtml_legend=1 00:05:33.022 --rc geninfo_all_blocks=1 00:05:33.022 --rc geninfo_unexecuted_blocks=1 00:05:33.022 00:05:33.022 ' 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.022 --rc genhtml_branch_coverage=1 00:05:33.022 --rc genhtml_function_coverage=1 00:05:33.022 --rc genhtml_legend=1 00:05:33.022 --rc geninfo_all_blocks=1 00:05:33.022 --rc geninfo_unexecuted_blocks=1 00:05:33.022 00:05:33.022 ' 00:05:33.022 04:58:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.022 04:58:46 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.294 ************************************ 00:05:33.294 START TEST thread_poller_perf 00:05:33.294 ************************************ 00:05:33.294 04:58:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.294 [2024-12-09 04:58:47.073902] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:33.294 [2024-12-09 04:58:47.074004] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313897 ] 00:05:33.294 [2024-12-09 04:58:47.174333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.294 [2024-12-09 04:58:47.250155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.294 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:34.763 [2024-12-09T03:58:48.760Z] ====================================== 00:05:34.763 [2024-12-09T03:58:48.760Z] busy:2409555516 (cyc) 00:05:34.763 [2024-12-09T03:58:48.760Z] total_run_count: 409000 00:05:34.763 [2024-12-09T03:58:48.760Z] tsc_hz: 2400000000 (cyc) 00:05:34.763 [2024-12-09T03:58:48.760Z] ====================================== 00:05:34.763 [2024-12-09T03:58:48.760Z] poller_cost: 5891 (cyc), 2454 (nsec) 00:05:34.763 00:05:34.763 real 0m1.347s 00:05:34.763 user 0m1.237s 00:05:34.763 sys 0m0.105s 00:05:34.763 04:58:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.763 04:58:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.763 ************************************ 00:05:34.763 END TEST thread_poller_perf 00:05:34.763 ************************************ 00:05:34.763 04:58:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.763 04:58:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:34.763 04:58:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.763 04:58:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.763 ************************************ 00:05:34.763 START TEST thread_poller_perf 00:05:34.763 ************************************ 00:05:34.763 04:58:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.763 [2024-12-09 04:58:48.499628] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:34.763 [2024-12-09 04:58:48.499724] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314118 ] 00:05:34.763 [2024-12-09 04:58:48.620315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.763 [2024-12-09 04:58:48.696465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.763 Running 1000 pollers for 1 seconds with 0 microseconds period. 
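Before the zero-period run reports below, note that the summary above is internally consistent: poller_cost is busy cycles divided by total poller runs, converted to nanoseconds via tsc_hz (a back-of-envelope check, assuming that formula, which the printed values match):

    # 2409555516 busy cycles over 409000 poller runs at a 2.4 GHz TSC:
    echo $(( 2409555516 / 409000 ))              # 5891 cyc per run
    echo $(( 5891 * 1000000000 / 2400000000 ))   # 2454 nsec per run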
00:05:36.248 [2024-12-09T03:58:50.245Z] ====================================== 00:05:36.248 [2024-12-09T03:58:50.245Z] busy:2402400348 (cyc) 00:05:36.248 [2024-12-09T03:58:50.245Z] total_run_count: 4927000 00:05:36.248 [2024-12-09T03:58:50.245Z] tsc_hz: 2400000000 (cyc) 00:05:36.248 [2024-12-09T03:58:50.245Z] ====================================== 00:05:36.248 [2024-12-09T03:58:50.245Z] poller_cost: 487 (cyc), 202 (nsec) 00:05:36.248 00:05:36.248 real 0m1.362s 00:05:36.248 user 0m1.237s 00:05:36.248 sys 0m0.120s 00:05:36.248 04:58:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.248 04:58:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.248 ************************************ 00:05:36.248 END TEST thread_poller_perf 00:05:36.248 ************************************ 00:05:36.248 04:58:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:36.248 00:05:36.248 real 0m3.061s 00:05:36.248 user 0m2.648s 00:05:36.248 sys 0m0.424s 00:05:36.248 04:58:49 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.248 04:58:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.248 ************************************ 00:05:36.248 END TEST thread 00:05:36.248 ************************************ 00:05:36.248 04:58:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:36.248 04:58:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:36.248 04:58:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.248 04:58:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.248 04:58:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.248 ************************************ 00:05:36.248 START TEST app_cmdline 00:05:36.248 ************************************ 00:05:36.248 04:58:49 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:36.248 * Looking for test storage... 
00:05:36.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:36.248 04:58:50 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.248 04:58:50 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.248 04:58:50 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.248 04:58:50 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.248 04:58:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:36.248 04:58:50 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.248 04:58:50 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.248 --rc genhtml_branch_coverage=1 00:05:36.248 --rc genhtml_function_coverage=1 00:05:36.248 --rc genhtml_legend=1 00:05:36.248 --rc geninfo_all_blocks=1 00:05:36.248 --rc geninfo_unexecuted_blocks=1 00:05:36.248 00:05:36.248 ' 00:05:36.248 04:58:50 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.248 --rc genhtml_branch_coverage=1 00:05:36.248 --rc genhtml_function_coverage=1 00:05:36.248 --rc genhtml_legend=1 00:05:36.248 --rc geninfo_all_blocks=1 00:05:36.248 --rc geninfo_unexecuted_blocks=1 
00:05:36.248 00:05:36.248 ' 00:05:36.248 04:58:50 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.248 --rc genhtml_branch_coverage=1 00:05:36.248 --rc genhtml_function_coverage=1 00:05:36.248 --rc genhtml_legend=1 00:05:36.248 --rc geninfo_all_blocks=1 00:05:36.248 --rc geninfo_unexecuted_blocks=1 00:05:36.248 00:05:36.248 ' 00:05:36.248 04:58:50 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.248 --rc genhtml_branch_coverage=1 00:05:36.248 --rc genhtml_function_coverage=1 00:05:36.248 --rc genhtml_legend=1 00:05:36.248 --rc geninfo_all_blocks=1 00:05:36.248 --rc geninfo_unexecuted_blocks=1 00:05:36.248 00:05:36.248 ' 00:05:36.248 04:58:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:36.249 04:58:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1314501 00:05:36.249 04:58:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1314501 00:05:36.249 04:58:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1314501 ']' 00:05:36.249 04:58:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:36.249 04:58:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.249 04:58:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.249 04:58:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.249 04:58:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.249 04:58:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.533 [2024-12-09 04:58:50.243348] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
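This target is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; anything else is rejected, which the env_dpdk_get_mem_stats probe further down confirms with a -32601 "Method not found" response. Sketched against the default /var/tmp/spdk.sock this target listens on:

    # On the allow-list: succeeds and prints the version object.
    ./scripts/rpc.py spdk_get_version
    # Not on the allow-list: fails with JSON-RPC error -32601 (Method not found).
    ./scripts/rpc.py env_dpdk_get_mem_stats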
00:05:36.533 [2024-12-09 04:58:50.243473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314501 ] 00:05:36.533 [2024-12-09 04:58:50.389406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.533 [2024-12-09 04:58:50.473798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.172 04:58:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.172 04:58:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:37.172 04:58:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:37.463 { 00:05:37.463 "version": "SPDK v25.01-pre git sha1 a2f5e1c2d", 00:05:37.463 "fields": { 00:05:37.463 "major": 25, 00:05:37.463 "minor": 1, 00:05:37.463 "patch": 0, 00:05:37.463 "suffix": "-pre", 00:05:37.463 "commit": "a2f5e1c2d" 00:05:37.463 } 00:05:37.463 } 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.463 request: 00:05:37.463 { 00:05:37.463 "method": "env_dpdk_get_mem_stats", 00:05:37.463 "req_id": 1 00:05:37.463 } 00:05:37.463 Got JSON-RPC error response 00:05:37.463 response: 00:05:37.463 { 00:05:37.463 "code": -32601, 00:05:37.463 "message": "Method not found" 00:05:37.463 } 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:37.463 04:58:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1314501 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1314501 ']' 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1314501 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.463 04:58:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1314501 00:05:37.724 04:58:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.724 04:58:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.724 04:58:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1314501' 00:05:37.724 killing process with pid 1314501 00:05:37.724 04:58:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 1314501 00:05:37.724 04:58:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 1314501 00:05:39.110 00:05:39.110 real 0m2.731s 00:05:39.110 user 0m2.925s 00:05:39.110 sys 0m0.615s 00:05:39.110 04:58:52 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.110 04:58:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:39.110 ************************************ 00:05:39.110 END TEST app_cmdline 00:05:39.110 ************************************ 00:05:39.111 04:58:52 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:39.111 04:58:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.111 04:58:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.111 04:58:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.111 ************************************ 00:05:39.111 START TEST version 00:05:39.111 ************************************ 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:39.111 * Looking for test storage... 
00:05:39.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.111 04:58:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.111 04:58:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.111 04:58:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.111 04:58:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.111 04:58:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.111 04:58:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.111 04:58:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.111 04:58:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.111 04:58:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.111 04:58:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.111 04:58:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.111 04:58:52 version -- scripts/common.sh@344 -- # case "$op" in 00:05:39.111 04:58:52 version -- scripts/common.sh@345 -- # : 1 00:05:39.111 04:58:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.111 04:58:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.111 04:58:52 version -- scripts/common.sh@365 -- # decimal 1 00:05:39.111 04:58:52 version -- scripts/common.sh@353 -- # local d=1 00:05:39.111 04:58:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.111 04:58:52 version -- scripts/common.sh@355 -- # echo 1 00:05:39.111 04:58:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.111 04:58:52 version -- scripts/common.sh@366 -- # decimal 2 00:05:39.111 04:58:52 version -- scripts/common.sh@353 -- # local d=2 00:05:39.111 04:58:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.111 04:58:52 version -- scripts/common.sh@355 -- # echo 2 00:05:39.111 04:58:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.111 04:58:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.111 04:58:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.111 04:58:52 version -- scripts/common.sh@368 -- # return 0 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.111 --rc genhtml_branch_coverage=1 00:05:39.111 --rc genhtml_function_coverage=1 00:05:39.111 --rc genhtml_legend=1 00:05:39.111 --rc geninfo_all_blocks=1 00:05:39.111 --rc geninfo_unexecuted_blocks=1 00:05:39.111 00:05:39.111 ' 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.111 --rc genhtml_branch_coverage=1 00:05:39.111 --rc genhtml_function_coverage=1 00:05:39.111 --rc genhtml_legend=1 00:05:39.111 --rc geninfo_all_blocks=1 00:05:39.111 --rc geninfo_unexecuted_blocks=1 00:05:39.111 00:05:39.111 ' 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.111 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.111 --rc genhtml_branch_coverage=1 00:05:39.111 --rc genhtml_function_coverage=1 00:05:39.111 --rc genhtml_legend=1 00:05:39.111 --rc geninfo_all_blocks=1 00:05:39.111 --rc geninfo_unexecuted_blocks=1 00:05:39.111 00:05:39.111 ' 00:05:39.111 04:58:52 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.111 --rc genhtml_branch_coverage=1 00:05:39.111 --rc genhtml_function_coverage=1 00:05:39.111 --rc genhtml_legend=1 00:05:39.111 --rc geninfo_all_blocks=1 00:05:39.111 --rc geninfo_unexecuted_blocks=1 00:05:39.111 00:05:39.111 ' 00:05:39.111 04:58:52 version -- app/version.sh@17 -- # get_header_version major 00:05:39.111 04:58:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:39.111 04:58:52 version -- app/version.sh@14 -- # cut -f2 00:05:39.111 04:58:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.111 04:58:52 version -- app/version.sh@17 -- # major=25 00:05:39.111 04:58:52 version -- app/version.sh@18 -- # get_header_version minor 00:05:39.111 04:58:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:39.111 04:58:52 version -- app/version.sh@14 -- # cut -f2 00:05:39.111 04:58:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.111 04:58:52 version -- app/version.sh@18 -- # minor=1 00:05:39.111 04:58:52 version -- app/version.sh@19 -- # get_header_version patch 00:05:39.111 04:58:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:39.111 04:58:52 version -- app/version.sh@14 -- # cut -f2 00:05:39.111 04:58:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.111 04:58:52 version -- app/version.sh@19 -- # patch=0 00:05:39.111 04:58:52 version -- app/version.sh@20 -- # get_header_version suffix 00:05:39.111 04:58:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:39.111 04:58:52 version -- app/version.sh@14 -- # cut -f2 00:05:39.111 04:58:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.111 04:58:52 version -- app/version.sh@20 -- # suffix=-pre 00:05:39.111 04:58:52 version -- app/version.sh@22 -- # version=25.1 00:05:39.111 04:58:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:39.111 04:58:52 version -- app/version.sh@28 -- # version=25.1rc0 00:05:39.111 04:58:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:39.111 04:58:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:39.111 04:58:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:39.111 04:58:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:39.111 00:05:39.111 real 0m0.279s 00:05:39.111 user 0m0.158s 00:05:39.111 sys 0m0.167s 00:05:39.111 04:58:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.111 
04:58:53 version -- common/autotest_common.sh@10 -- # set +x 00:05:39.111 ************************************ 00:05:39.111 END TEST version 00:05:39.111 ************************************ 00:05:39.111 04:58:53 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:39.111 04:58:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:39.111 04:58:53 -- spdk/autotest.sh@194 -- # uname -s 00:05:39.111 04:58:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:39.111 04:58:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:39.111 04:58:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:39.111 04:58:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:39.111 04:58:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:39.111 04:58:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:39.111 04:58:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.111 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.373 04:58:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:39.373 04:58:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:39.373 04:58:53 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:39.373 04:58:53 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:39.373 04:58:53 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:39.373 04:58:53 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:39.373 04:58:53 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:39.373 04:58:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:39.373 04:58:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.373 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.373 ************************************ 00:05:39.373 START TEST nvmf_tcp 00:05:39.373 ************************************ 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:39.373 * Looking for test storage... 
00:05:39.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.373 04:58:53 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.373 --rc genhtml_branch_coverage=1 00:05:39.373 --rc genhtml_function_coverage=1 00:05:39.373 --rc genhtml_legend=1 00:05:39.373 --rc geninfo_all_blocks=1 00:05:39.373 --rc geninfo_unexecuted_blocks=1 00:05:39.373 00:05:39.373 ' 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.373 --rc genhtml_branch_coverage=1 00:05:39.373 --rc genhtml_function_coverage=1 00:05:39.373 --rc genhtml_legend=1 00:05:39.373 --rc geninfo_all_blocks=1 00:05:39.373 --rc geninfo_unexecuted_blocks=1 00:05:39.373 00:05:39.373 ' 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:39.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.373 --rc genhtml_branch_coverage=1 00:05:39.373 --rc genhtml_function_coverage=1 00:05:39.373 --rc genhtml_legend=1 00:05:39.373 --rc geninfo_all_blocks=1 00:05:39.373 --rc geninfo_unexecuted_blocks=1 00:05:39.373 00:05:39.373 ' 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.373 --rc genhtml_branch_coverage=1 00:05:39.373 --rc genhtml_function_coverage=1 00:05:39.373 --rc genhtml_legend=1 00:05:39.373 --rc geninfo_all_blocks=1 00:05:39.373 --rc geninfo_unexecuted_blocks=1 00:05:39.373 00:05:39.373 ' 00:05:39.373 04:58:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:39.373 04:58:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:39.373 04:58:53 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.373 04:58:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.635 ************************************ 00:05:39.635 START TEST nvmf_target_core 00:05:39.635 ************************************ 00:05:39.635 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:39.635 * Looking for test storage... 00:05:39.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:39.635 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.635 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.635 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.635 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.636 --rc genhtml_branch_coverage=1 00:05:39.636 --rc genhtml_function_coverage=1 00:05:39.636 --rc genhtml_legend=1 00:05:39.636 --rc geninfo_all_blocks=1 00:05:39.636 --rc geninfo_unexecuted_blocks=1 00:05:39.636 00:05:39.636 ' 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.636 --rc genhtml_branch_coverage=1 00:05:39.636 --rc genhtml_function_coverage=1 00:05:39.636 --rc genhtml_legend=1 00:05:39.636 --rc geninfo_all_blocks=1 00:05:39.636 --rc geninfo_unexecuted_blocks=1 00:05:39.636 00:05:39.636 ' 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.636 --rc genhtml_branch_coverage=1 00:05:39.636 --rc genhtml_function_coverage=1 00:05:39.636 --rc genhtml_legend=1 00:05:39.636 --rc geninfo_all_blocks=1 00:05:39.636 --rc geninfo_unexecuted_blocks=1 00:05:39.636 00:05:39.636 ' 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.636 --rc genhtml_branch_coverage=1 00:05:39.636 --rc genhtml_function_coverage=1 00:05:39.636 --rc genhtml_legend=1 00:05:39.636 --rc geninfo_all_blocks=1 00:05:39.636 --rc geninfo_unexecuted_blocks=1 00:05:39.636 00:05:39.636 ' 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:39.636 04:58:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.637 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:39.899 
************************************ 00:05:39.899 START TEST nvmf_abort 00:05:39.899 ************************************ 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:39.899 * Looking for test storage... 00:05:39.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:39.899 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.900 --rc genhtml_branch_coverage=1 00:05:39.900 --rc genhtml_function_coverage=1 00:05:39.900 --rc genhtml_legend=1 00:05:39.900 --rc geninfo_all_blocks=1 00:05:39.900 --rc geninfo_unexecuted_blocks=1 00:05:39.900 00:05:39.900 ' 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.900 --rc genhtml_branch_coverage=1 00:05:39.900 --rc genhtml_function_coverage=1 00:05:39.900 --rc genhtml_legend=1 00:05:39.900 --rc geninfo_all_blocks=1 00:05:39.900 --rc geninfo_unexecuted_blocks=1 00:05:39.900 00:05:39.900 ' 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.900 --rc genhtml_branch_coverage=1 00:05:39.900 --rc genhtml_function_coverage=1 00:05:39.900 --rc genhtml_legend=1 00:05:39.900 --rc geninfo_all_blocks=1 00:05:39.900 --rc geninfo_unexecuted_blocks=1 00:05:39.900 00:05:39.900 ' 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.900 --rc genhtml_branch_coverage=1 00:05:39.900 --rc genhtml_function_coverage=1 00:05:39.900 --rc genhtml_legend=1 00:05:39.900 --rc geninfo_all_blocks=1 00:05:39.900 --rc geninfo_unexecuted_blocks=1 00:05:39.900 00:05:39.900 ' 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.900 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.163 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:40.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
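nvmftestinit, invoked just above, drives the PCI scan and network plumbing recorded in the lines that follow. A condensed sketch of that setup, using the interface names the log reports (the two E810 ports surface as cvl_0_0, moved to the target namespace, and cvl_0_1, left in the root namespace for the initiator):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                       # target gets a private namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # reachability check before the test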
00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:40.164 04:58:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:48.301 04:59:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:48.301 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:48.301 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:48.301 04:59:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:48.301 Found net devices under 0000:31:00.0: cvl_0_0 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:48.301 Found net devices under 0000:31:00.1: cvl_0_1 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:48.301 04:59:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:48.301 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:48.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:48.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:05:48.301 00:05:48.301 --- 10.0.0.2 ping statistics --- 00:05:48.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:48.302 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:48.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:48.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:05:48.302 00:05:48.302 --- 10.0.0.1 ping statistics --- 00:05:48.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:48.302 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1319315 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1319315 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1319315 ']' 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.302 04:59:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.302 [2024-12-09 04:59:01.525190] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:48.302 [2024-12-09 04:59:01.525312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:48.302 [2024-12-09 04:59:01.692704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.302 [2024-12-09 04:59:01.817314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:48.302 [2024-12-09 04:59:01.817382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:48.302 [2024-12-09 04:59:01.817396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:48.302 [2024-12-09 04:59:01.817409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:48.302 [2024-12-09 04:59:01.817419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:48.302 [2024-12-09 04:59:01.820289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.302 [2024-12-09 04:59:01.820418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.302 [2024-12-09 04:59:01.820432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.564 [2024-12-09 04:59:02.358644] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.564 Malloc0 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.564 Delay0 
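With the delay bdev (Delay0) created, abort.sh finishes provisioning the target and launches the initiator-side abort example; the remaining rpc_cmd calls appear below. A sketch of the same sequence as direct scripts/rpc.py invocations (paths abbreviated; rpc_cmd is the harness wrapper, and the nvmf_tgt it talks to was started with -m 0xE inside the cvl_0_0_ns_spdk namespace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0      # 64 MB bdev, 4 KiB blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large per-I/O delays keep commands in flight to abort
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: queue depth 128 on one core for 1 second, issuing aborts as it goes
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128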
00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.564 [2024-12-09 04:59:02.489966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.564 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:48.825 [2024-12-09 04:59:02.684660] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:51.367 Initializing NVMe Controllers 00:05:51.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:51.367 controller IO queue size 128 less than required 00:05:51.367 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:51.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:51.367 Initialization complete. Launching workers. 
00:05:51.367 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28620 00:05:51.367 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28677, failed to submit 66 00:05:51.367 success 28620, unsuccessful 57, failed 0 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:51.367 rmmod nvme_tcp 00:05:51.367 rmmod nvme_fabrics 00:05:51.367 rmmod nvme_keyring 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1319315 ']' 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1319315 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1319315 ']' 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1319315 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1319315 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1319315' 00:05:51.367 killing process with pid 1319315 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1319315 00:05:51.367 04:59:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1319315 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:51.627 04:59:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.627 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:54.174 00:05:54.174 real 0m14.010s 00:05:54.174 user 0m15.292s 00:05:54.174 sys 0m6.553s 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.174 ************************************ 00:05:54.174 END TEST nvmf_abort 00:05:54.174 ************************************ 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:54.174 ************************************ 00:05:54.174 START TEST nvmf_ns_hotplug_stress 00:05:54.174 ************************************ 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:54.174 * Looking for test storage... 
00:05:54.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.174 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.175 --rc genhtml_branch_coverage=1 00:05:54.175 --rc genhtml_function_coverage=1 00:05:54.175 --rc genhtml_legend=1 00:05:54.175 --rc geninfo_all_blocks=1 00:05:54.175 --rc geninfo_unexecuted_blocks=1 00:05:54.175 00:05:54.175 ' 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.175 --rc genhtml_branch_coverage=1 00:05:54.175 --rc genhtml_function_coverage=1 00:05:54.175 --rc genhtml_legend=1 00:05:54.175 --rc geninfo_all_blocks=1 00:05:54.175 --rc geninfo_unexecuted_blocks=1 00:05:54.175 00:05:54.175 ' 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.175 --rc genhtml_branch_coverage=1 00:05:54.175 --rc genhtml_function_coverage=1 00:05:54.175 --rc genhtml_legend=1 00:05:54.175 --rc geninfo_all_blocks=1 00:05:54.175 --rc geninfo_unexecuted_blocks=1 00:05:54.175 00:05:54.175 ' 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.175 --rc genhtml_branch_coverage=1 00:05:54.175 --rc genhtml_function_coverage=1 00:05:54.175 --rc genhtml_legend=1 00:05:54.175 --rc geninfo_all_blocks=1 00:05:54.175 --rc geninfo_unexecuted_blocks=1 00:05:54.175 00:05:54.175 ' 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:54.175 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:54.175 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:54.175 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.175 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.175 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.175 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:54.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:54.175 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:54.176 04:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:02.317 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:02.318 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.318 
04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:02.318 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:02.318 Found net devices under 0000:31:00.0: cvl_0_0 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:02.318 Found net devices under 0000:31:00.1: cvl_0_1 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:02.318 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:02.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:02.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:06:02.319 00:06:02.319 --- 10.0.0.2 ping statistics --- 00:06:02.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.319 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:02.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:02.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:06:02.319 00:06:02.319 --- 10.0.0.1 ping statistics --- 00:06:02.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.319 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1324390 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1324390 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1324390 ']' 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.319 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:02.319 [2024-12-09 04:59:15.719193] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:02.319 [2024-12-09 04:59:15.719324] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:02.319 [2024-12-09 04:59:15.883883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.319 [2024-12-09 04:59:16.009295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:02.319 [2024-12-09 04:59:16.009359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:02.319 [2024-12-09 04:59:16.009373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:02.319 [2024-12-09 04:59:16.009386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:02.319 [2024-12-09 04:59:16.009397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
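
The "integer expression expected" complaint traced earlier from test/nvmf/common.sh line 33 comes from '[' '' -eq 1 ']' applying a numeric test to an empty variable; a guard written as [ "${var:-0}" -eq 1 ] (with whatever variable line 33 actually reads, which the trace does not show) would avoid it. The records that follow bring up the reactors and then run the hotplug stress proper: create the TCP transport, the nqn.2016-06.io.spdk:cnode1 subsystem and its listener, attach a Delay0 and a NULL1 namespace, launch spdk_nvme_perf against 10.0.0.2:4420, and loop hot-removing namespace 1, re-adding Delay0, and growing NULL1 while I/O is in flight. Below is a minimal bash sketch of that sequence, pieced together from the rpc.py calls traced in the surrounding records; it is a reading aid under those assumptions, not the test script itself, and the variables rpc, nqn and size are shorthand introduced here:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # transport/subsystem/listener, flags exactly as traced in the log
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  # Malloc0 backs the Delay0 passthrough bdev; NULL1 is the resizable null bdev
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0
  $rpc nvmf_subsystem_add_ns "$nqn" NULL1
  # background read workload over NVMe/TCP, as at ns_hotplug_stress.sh@40
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do   # loop while the perf process lives
    $rpc nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove nsid 1 under I/O
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0  # hot-add it back
    size=$((size + 1))
    $rpc bdev_null_resize NULL1 "$size"       # grow NULL1 each pass
  done

kill -0 delivers no signal; it only checks that the target process still exists, which is why every iteration in the trace opens with "kill -0 1324774".
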
00:06:02.319 [2024-12-09 04:59:16.012100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.319 [2024-12-09 04:59:16.012320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.320 [2024-12-09 04:59:16.012338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.584 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.584 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:02.584 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:02.584 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.584 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:02.584 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:02.584 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:02.584 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:02.845 [2024-12-09 04:59:16.726404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.845 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:03.106 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:03.367 [2024-12-09 04:59:17.136161] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:03.367 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:03.628 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:03.628 Malloc0 00:06:03.628 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:03.889 Delay0 00:06:03.889 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.150 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:04.150 NULL1 00:06:04.150 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:04.410 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:04.411 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1324774 00:06:04.411 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:04.411 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.671 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.671 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:04.671 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:04.933 true 00:06:04.933 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:04.933 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.193 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.455 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:05.455 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:05.455 true 00:06:05.455 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:05.455 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.716 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.977 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:05.977 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:05.977 true 00:06:05.977 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:05.977 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.237 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.498 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:06.498 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:06.498 true 00:06:06.498 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:06.498 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.784 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.043 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:07.044 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:07.044 true 00:06:07.044 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:07.044 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.304 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.564 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:07.564 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:07.564 true 00:06:07.823 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:07.823 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.823 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.085 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:08.085 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:08.345 true 00:06:08.345 04:59:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:08.345 [04:59:22 through 04:59:41: iterations null_size=1008 through 1041 elided; each cycle repeats the same record group as above: ns_hotplug_stress.sh@45 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1, @46 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0, @49 null_size=<n>, @50 bdev_null_resize NULL1 <n>, true, @44 kill -0 1324774] 00:06:27.170 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.431 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.691 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:27.691 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:27.691 true 00:06:27.691 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:27.691 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.952 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.212 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:28.212 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:28.212 true 00:06:28.212 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:28.212 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.474 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.734 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:28.734 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:28.734 true 00:06:28.734 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:28.734 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.995 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.256 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:29.256 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:29.518 true 00:06:29.518 04:59:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:29.518 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.518 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.780 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:29.780 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:30.042 true 00:06:30.042 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:30.042 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.042 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.304 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:30.304 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:30.565 true 00:06:30.565 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:30.565 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.825 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.825 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:30.825 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:31.085 true 00:06:31.085 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:31.085 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.346 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.346 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:31.346 04:59:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:31.605 true 00:06:31.605 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:31.606 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.866 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.126 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:32.126 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:32.126 true 00:06:32.386 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:32.387 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.387 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.647 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:32.647 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:32.908 true 00:06:32.908 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:32.908 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.908 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.169 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:33.169 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:33.431 true 00:06:33.431 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:33.431 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.431 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.693 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:33.693 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:33.953 true 00:06:33.953 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:33.953 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.215 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.215 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:34.215 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:34.476 true 00:06:34.476 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:34.476 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.737 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.737 Initializing NVMe Controllers 00:06:34.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:34.737 Controller IO queue size 128, less than required. 00:06:34.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:34.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:34.737 Initialization complete. Launching workers. 
00:06:34.737 ========================================================
00:06:34.737 Latency(us)
00:06:34.737 Device Information : IOPS MiB/s Average min max
00:06:34.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27733.74 13.54 4615.33 1275.57 43432.97
00:06:34.737 ========================================================
00:06:34.737 Total : 27733.74 13.54 4615.33 1275.57 43432.97
00:06:34.737
00:06:34.737 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:34.737 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:34.996 true 00:06:34.996 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1324774 00:06:34.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1324774) - No such process 00:06:34.996 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1324774 00:06:34.996 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.257 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:35.518 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:35.518 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:35.518 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:35.518 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.518 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:35.518 null0 00:06:35.518 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.518 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.518 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:35.778 null1 00:06:35.778 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.778 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.778 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:35.778 null2 00:06:36.040 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.040 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.041
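
The hot-plug loop traced above is ns_hotplug_stress.sh lines 44-50; the @NN markers in each xtrace record are script line numbers. A minimal sketch of that loop, assuming SPDK's scripts/rpc.py is invocable as rpc.py and using an illustrative $perf_pid variable for the I/O generator process (PID 1324774 in this run):

    null_size=1024   # illustrative starting size; the trace above is in the 1030s-1050s
    while kill -0 "$perf_pid" 2>/dev/null; do
        # hot-swap namespace 1 while I/O is in flight, then grow the null bdev
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done

Once kill -0 fails (the "No such process" record above), the loop exits; the @54/@55 records then drop namespaces 1 and 2 ahead of the multi-threaded phase that the surrounding @58-@60 records set up.
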
04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:36.041 null3 00:06:36.041 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.041 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.041 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:36.301 null4 00:06:36.301 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.301 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.301 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:36.561 null5 00:06:36.561 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.561 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.561 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:36.561 null6 00:06:36.561 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.561 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.561 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:36.823 null7 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
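
The @58-@60 records above prepare the multi-threaded phase: eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size, one per worker. A minimal equivalent of that setup, under the same rpc.py shorthand assumption; the worker launch itself (@62-@66) follows in the records below, with a sketch of that phase a little further on:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096   # name, size in MB, block size
    done
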
00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.823 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
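
Interleaved with the launch records around this point, the body of add_remove itself is visible in the xtrace (script lines @14-@18): each worker binds one namespace ID to one null bdev and hot-plugs it ten times against the same subsystem. A sketch of the whole parallel phase, same rpc.py shorthand as above; the wait record just below lists the eight background worker PIDs:

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the bdev as a namespace, then detach it again
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # one worker per namespace/bdev pair
        pids+=($!)
    done
    wait "${pids[@]}"
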
00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1331429 1331431 1331434 1331437 1331440 1331443 1331446 1331448 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.824 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.085 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.085 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.085 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.085 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.085 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.085 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.085 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.085 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.085 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.085 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.085 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.085 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.085 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.085 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.346 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.346 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.346 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.346 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.346 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.346 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.346 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.346 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.347 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.608 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.608 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.608 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.608 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.608 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.608 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.609 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.869 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.129 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.129 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.129 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.129 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.129 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.129 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.129 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.389 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.389 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.390 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.651 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.651 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.651 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.651 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.651 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.651 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.651 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.651 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.651 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.652 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.912 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.173 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.173 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.173 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.173 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.173 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.432 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.691 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.692 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.952 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.212 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.212 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.212 04:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.212 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.470 04:59:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.470 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.731 04:59:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:40.731 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:40.989 rmmod nvme_tcp 00:06:40.989 rmmod nvme_fabrics 00:06:40.989 rmmod nvme_keyring 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1324390 ']' 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1324390 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1324390 ']' 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1324390 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1324390 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1324390' 00:06:40.989 killing process with pid 1324390 00:06:40.989 04:59:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1324390 00:06:40.989 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1324390 00:06:41.560 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:41.560 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:41.560 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:41.560 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:41.560 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:41.560 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:41.560 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:41.560 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:41.560 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:41.561 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.561 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.561 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:44.101 00:06:44.101 real 0m49.798s 00:06:44.101 user 3m21.788s 00:06:44.101 sys 0m17.587s 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:44.101 ************************************ 00:06:44.101 END TEST nvmf_ns_hotplug_stress 00:06:44.101 ************************************ 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.101 ************************************ 00:06:44.101 START TEST nvmf_delete_subsystem 00:06:44.101 ************************************ 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:44.101 * Looking for test storage... 
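The autotest_common.sh@954-@978 markers traced just before the END TEST banner above are the killprocess helper shutting down the nvmf_tgt reactor (pid 1324390) after the nvme-tcp modules were unloaded. A sketch reconstructed from the traced commands; branches this run does not exercise, such as a sudo-owned process, are guesses:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @954: a PID is required
        kill -0 "$pid" || return 1                           # @958: liveness probe, sends no signal
        if [ "$(uname)" = Linux ]; then                      # @959
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_1 in this run
            [ "$process_name" = sudo ] && return 1           # @964: guessed handling for the sudo case
        fi
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973: default SIGTERM
        wait "$pid" || true                                  # @978: reap the child, ignore its status
    }

kill -0 is the idiomatic liveness check here: it performs the existence and permission checks of kill(2) without actually delivering a signal.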
00:06:44.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.101 --rc genhtml_branch_coverage=1 00:06:44.101 --rc genhtml_function_coverage=1 00:06:44.101 --rc genhtml_legend=1 00:06:44.101 --rc geninfo_all_blocks=1 00:06:44.101 --rc geninfo_unexecuted_blocks=1 00:06:44.101 00:06:44.101 ' 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.101 --rc genhtml_branch_coverage=1 00:06:44.101 --rc genhtml_function_coverage=1 00:06:44.101 --rc genhtml_legend=1 00:06:44.101 --rc geninfo_all_blocks=1 00:06:44.101 --rc geninfo_unexecuted_blocks=1 00:06:44.101 00:06:44.101 ' 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.101 --rc genhtml_branch_coverage=1 00:06:44.101 --rc genhtml_function_coverage=1 00:06:44.101 --rc genhtml_legend=1 00:06:44.101 --rc geninfo_all_blocks=1 00:06:44.101 --rc geninfo_unexecuted_blocks=1 00:06:44.101 00:06:44.101 ' 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.101 --rc genhtml_branch_coverage=1 00:06:44.101 --rc genhtml_function_coverage=1 00:06:44.101 --rc genhtml_legend=1 00:06:44.101 --rc geninfo_all_blocks=1 00:06:44.101 --rc geninfo_unexecuted_blocks=1 00:06:44.101 00:06:44.101 ' 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.101 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.102 04:59:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:52.237 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.237 
05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:06:52.237 Found 0000:31:00.1 (0x8086 - 0x159b)
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:06:52.237 Found net devices under 0000:31:00.1: cvl_0_1
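The nvmf/common.sh@410-@429 block above maps each detected PCI function to its kernel interface by globbing sysfs, which is how the two e810 ports become cvl_0_0 and cvl_0_1. A minimal sketch of that lookup, with the PCI addresses from this run hard-coded and the @418 operstate check omitted:

    net_devs=()
    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)           # @411: one entry per interface
        pci_net_devs=("${pci_net_devs[@]##*/}")                    # @427: strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"    # @428
        net_devs+=("${pci_net_devs[@]}")                           # @429: collects cvl_0_0, then cvl_0_1
    done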
00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.237 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:52.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:06:52.238 00:06:52.238 --- 10.0.0.2 ping statistics --- 00:06:52.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.238 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:52.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:06:52.238 00:06:52.238 --- 10.0.0.1 ping statistics --- 00:06:52.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.238 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1336967 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1336967 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1336967 ']' 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.238 05:00:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.238 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.238 [2024-12-09 05:00:05.567602] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:52.238 [2024-12-09 05:00:05.567728] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.238 [2024-12-09 05:00:05.731090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.238 [2024-12-09 05:00:05.856563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.238 [2024-12-09 05:00:05.856627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.238 [2024-12-09 05:00:05.856642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.238 [2024-12-09 05:00:05.856657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.238 [2024-12-09 05:00:05.856667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.238 [2024-12-09 05:00:05.859184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.238 [2024-12-09 05:00:05.859210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.498 [2024-12-09 05:00:06.395379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.498 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:52.499 05:00:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.499 [2024-12-09 05:00:06.420954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.499 NULL1 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.499 Delay0 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1337192 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:52.499 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:52.760 [2024-12-09 05:00:06.589773] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
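At this point the target stack for the test is fully assembled: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a namespace backed by a null bdev wrapped in a delay bdev. The delay bdev is the point of the exercise: with 1,000,000 us injected on every latency knob and perf pushing queue depth 128, essentially the whole queue is still in flight when nvmf_delete_subsystem fires two seconds later. The same stack can be rebuilt by hand with rpc.py (the script path and a running nvmf_tgt are assumptions; every argument is taken verbatim from the trace):

  RPC=scripts/rpc.py   # assumes nvmf_tgt is already up and listening on its RPC socket
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512     # 1000 MiB of 512 B blocks, no backing store
  $RPC bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s on every read/write path
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0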
00:06:54.670 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:54.670 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.670 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 starting I/O failed: -6 00:06:54.930 starting I/O failed: -6 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write 
completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Read completed with error (sct=0, sc=8) 00:06:54.930 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 [2024-12-09 05:00:08.728224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 starting I/O failed: -6 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 
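Decoding the storm of completions above: sct is the NVMe status code type (0 = generic command status) and sc the status code within that type; 0x8 in the generic set is "Command Aborted due to SQ Deletion", which is exactly what in-flight commands receive when the controller's queues are torn down underneath them. The interleaved "starting I/O failed: -6" lines are fresh submissions bouncing off the dead qpair (-6 most likely being -ENXIO). A tiny decoder for the pair seen here, covering only the relevant subset of the spec's generic codes:

  decode_cpl() {   # usage: decode_cpl <sct> <sc>  (values as printed in the log)
    local sct=$1 sc=$2
    (( sct != 0 )) && { echo "status type $sct (non-generic)"; return; }
    case $sc in
      0) echo "successful completion" ;;
      7) echo "command abort requested" ;;
      8) echo "command aborted due to SQ deletion" ;;
      *) echo "generic status $sc (see NVMe base spec)" ;;
    esac
  }
  decode_cpl 0 8   # -> command aborted due to SQ deletion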
00:06:54.931 starting I/O failed: -6 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 starting I/O failed: -6 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 starting I/O failed: -6 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 starting I/O failed: -6 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 starting I/O failed: -6 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 starting I/O failed: -6 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 starting I/O failed: -6 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 starting I/O failed: -6 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 starting I/O failed: -6 00:06:54.931 [2024-12-09 05:00:08.732200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(6) to be set 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 
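The volume of these records is itself a check on the workload shape: perf ran with -q 128 on core mask 0xC (lcores 2 and 3, one qpair each), so up to roughly 256 commands can be outstanding when the subsystem disappears. Counting the aborts in a captured copy of this output (perf.log is an assumed capture of the run's stdout, not a file the test writes):

  aborted=$(grep -c 'completed with error (sct=0, sc=8)' perf.log)
  failed=$(grep -c 'starting I/O failed: -6' perf.log)
  echo "aborted completions: $aborted, failed submissions: $failed"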
00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Write completed with error (sct=0, sc=8) 00:06:54.931 Read completed with error (sct=0, sc=8) 00:06:54.931 [2024-12-09 05:00:08.733192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030a00 is same with the state(6) to be set 00:06:55.868 [2024-12-09 05:00:09.704332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025d80 is same with the state(6) to be set 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Write completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Write completed with error (sct=0, sc=8) 00:06:55.868 Write completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Write completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Read completed with error (sct=0, sc=8) 00:06:55.868 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 [2024-12-09 05:00:09.731783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027680 is same with the state(6) to be set 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error 
(sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 [2024-12-09 05:00:09.732575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026c80 is same with the state(6) to be set 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 [2024-12-09 05:00:09.733587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030f00 is same with the state(6) to be set 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 Write completed with error (sct=0, sc=8) 00:06:55.869 05:00:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.869 05:00:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:55.869 Read completed with error (sct=0, sc=8) 00:06:55.869 [2024-12-09 05:00:09.737604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x615000030500 is same with the state(6) to be set 00:06:55.869 05:00:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1337192 00:06:55.869 05:00:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:55.869 Initializing NVMe Controllers 00:06:55.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.869 Controller IO queue size 128, less than required. 00:06:55.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:55.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:55.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:55.869 Initialization complete. Launching workers. 00:06:55.869 ======================================================== 00:06:55.869 Latency(us) 00:06:55.869 Device Information : IOPS MiB/s Average min max 00:06:55.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.06 0.09 887795.86 571.84 1008725.44 00:06:55.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.67 0.08 932091.43 1014.16 1012568.38 00:06:55.869 ======================================================== 00:06:55.869 Total : 329.73 0.16 908574.02 571.84 1012568.38 00:06:55.869 00:06:55.869 [2024-12-09 05:00:09.739181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025d80 (9): Bad file descriptor 00:06:55.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1337192 00:06:56.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1337192) - No such process 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1337192 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1337192 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1337192 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.437 [2024-12-09 05:00:10.269435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1338234 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1338234 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:56.437 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.437 [2024-12-09 05:00:10.413349] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
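Round two recreates the same subsystem, re-adds the listener and the Delay0 namespace, and launches a second perf (pid 1338234 here, -t 3 this time); the "kill -0 ... sleep 0.5" pairs that follow are the script polling for that process to exit on its own. A plausible reconstruction of the loop from the traced delete_subsystem.sh lines 56-60 ($perf_pid stands in for the backgrounded perf). With Delay0 still injecting 1,000,000 us per I/O, the results table further below should report ~1.0 s minimum latency and queue_depth/latency IOPS per qpair, which it does:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # $perf_pid: the backgrounded spdk_nvme_perf
    (( delay++ > 20 )) && exit 1              # still alive after ~10 s: fail the test
    sleep 0.5
  done
  echo "expected per-qpair IOPS: $(( 128 * 1000000 / 1000000 ))"   # qd / 1 s latency -> 128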
00:06:57.005 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.005 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1338234 00:06:57.005 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.576 05:00:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.577 05:00:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1338234 00:06:57.577 05:00:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.837 05:00:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.837 05:00:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1338234 00:06:57.837 05:00:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.406 05:00:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.406 05:00:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1338234 00:06:58.406 05:00:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.976 05:00:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.977 05:00:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1338234 00:06:58.977 05:00:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.547 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.547 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1338234 00:06:59.548 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.808 Initializing NVMe Controllers 00:06:59.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:59.808 Controller IO queue size 128, less than required. 00:06:59.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:59.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:59.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:59.808 Initialization complete. Launching workers. 
00:06:59.808 ======================================================== 00:06:59.808 Latency(us) 00:06:59.808 Device Information : IOPS MiB/s Average min max 00:06:59.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002119.85 1000121.66 1006080.58 00:06:59.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003472.68 1000571.09 1008613.54 00:06:59.808 ======================================================== 00:06:59.808 Total : 256.00 0.12 1002796.26 1000121.66 1008613.54 00:06:59.808 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1338234 00:07:00.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1338234) - No such process 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1338234 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:00.070 rmmod nvme_tcp 00:07:00.070 rmmod nvme_fabrics 00:07:00.070 rmmod nvme_keyring 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1336967 ']' 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1336967 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1336967 ']' 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1336967 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1336967 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1336967' 00:07:00.070 killing process with pid 1336967 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1336967 00:07:00.070 05:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1336967 00:07:00.642 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.643 05:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.193 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:03.193 00:07:03.193 real 0m18.957s 00:07:03.193 user 0m31.412s 00:07:03.193 sys 0m7.001s 00:07:03.193 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.193 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 ************************************ 00:07:03.193 END TEST nvmf_delete_subsystem 00:07:03.193 ************************************ 00:07:03.193 05:00:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:03.193 05:00:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.193 05:00:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.193 05:00:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 ************************************ 00:07:03.193 START TEST nvmf_host_management 00:07:03.193 ************************************ 00:07:03.193 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:03.193 * Looking for test storage... 
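nvmftestfini above tears the rig back down: the nvme-tcp/fabrics/keyring modules are unloaded, the target process is killed, and iptr restores the firewall. The restore works because every rule the harness inserts is tagged with an 'SPDK_NVMF:' comment at creation (see the iptables line during setup), so teardown can strip exactly those rules with one save/filter/restore pass instead of tracking them individually. The pattern, standalone (the final netns delete is the assumed effect of the traced but elided _remove_spdk_ns call):

  # setup: tag the rule with its own spec so it is self-describing and greppable
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown: drop every tagged rule in one pass, leaving the rest of the ruleset intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                      # matches the trace's final flush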
00:07:03.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.193 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:03.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.194 --rc genhtml_branch_coverage=1 00:07:03.194 --rc genhtml_function_coverage=1 00:07:03.194 --rc genhtml_legend=1 00:07:03.194 --rc geninfo_all_blocks=1 00:07:03.194 --rc geninfo_unexecuted_blocks=1 00:07:03.194 00:07:03.194 ' 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:03.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.194 --rc genhtml_branch_coverage=1 00:07:03.194 --rc genhtml_function_coverage=1 00:07:03.194 --rc genhtml_legend=1 00:07:03.194 --rc geninfo_all_blocks=1 00:07:03.194 --rc geninfo_unexecuted_blocks=1 00:07:03.194 00:07:03.194 ' 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:03.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.194 --rc genhtml_branch_coverage=1 00:07:03.194 --rc genhtml_function_coverage=1 00:07:03.194 --rc genhtml_legend=1 00:07:03.194 --rc geninfo_all_blocks=1 00:07:03.194 --rc geninfo_unexecuted_blocks=1 00:07:03.194 00:07:03.194 ' 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:03.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.194 --rc genhtml_branch_coverage=1 00:07:03.194 --rc genhtml_function_coverage=1 00:07:03.194 --rc genhtml_legend=1 00:07:03.194 --rc geninfo_all_blocks=1 00:07:03.194 --rc geninfo_unexecuted_blocks=1 00:07:03.194 00:07:03.194 ' 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.194 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:03.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:03.195 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.339 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.339 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:11.339 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:11.339 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:11.339 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:11.339 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:11.339 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:11.339 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:11.339 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:11.340 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:11.340 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:11.340 Found net devices under 0000:31:00.0: cvl_0_0 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.340 05:00:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:11.340 Found net devices under 0000:31:00.1: cvl_0_1 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.340 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:11.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:07:11.341 00:07:11.341 --- 10.0.0.2 ping statistics --- 00:07:11.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.341 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:07:11.341 00:07:11.341 --- 10.0.0.1 ping statistics --- 00:07:11.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.341 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1343518 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1343518 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:11.341 05:00:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1343518 ']' 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.341 05:00:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.341 [2024-12-09 05:00:24.612435] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:11.341 [2024-12-09 05:00:24.612560] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.341 [2024-12-09 05:00:24.779534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.341 [2024-12-09 05:00:24.912343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.341 [2024-12-09 05:00:24.912411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.341 [2024-12-09 05:00:24.912426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.341 [2024-12-09 05:00:24.912439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.341 [2024-12-09 05:00:24.912450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
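Taken together, the nvmftestinit trace above (nvmf/common.sh@250-291) builds a loopback NVMe/TCP topology out of the two e810 ports: the target port cvl_0_0 is moved into a private network namespace so that target and initiator can exercise real hardware on a single machine. Condensed to the effective commands (interfaces, addresses, and flags all copied from the traced lines), the setup amounts to:

  ip netns add cvl_0_0_ns_spdk                         # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E   # nvmfappstart

Both ping checks pass above (0.647 ms and 0.319 ms round trips), so the target can listen on 10.0.0.2:4420 while bdevperf connects from the root namespace.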
00:07:11.341 [2024-12-09 05:00:24.915337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.341 [2024-12-09 05:00:24.915470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.341 [2024-12-09 05:00:24.915577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.341 [2024-12-09 05:00:24.915605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.602 [2024-12-09 05:00:25.447083] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.602 Malloc0 00:07:11.602 [2024-12-09 05:00:25.573585] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.602 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1343741 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1343741 /var/tmp/bdevperf.sock 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1343741 ']' 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:11.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:11.862 { 00:07:11.862 "params": { 00:07:11.862 "name": "Nvme$subsystem", 00:07:11.862 "trtype": "$TEST_TRANSPORT", 00:07:11.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:11.862 "adrfam": "ipv4", 00:07:11.862 "trsvcid": "$NVMF_PORT", 00:07:11.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:11.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:11.862 "hdgst": ${hdgst:-false}, 00:07:11.862 "ddgst": ${ddgst:-false} 00:07:11.862 }, 00:07:11.862 "method": "bdev_nvme_attach_controller" 00:07:11.862 } 00:07:11.862 EOF 00:07:11.862 )") 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:11.862 05:00:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:11.862 "params": { 00:07:11.862 "name": "Nvme0", 00:07:11.862 "trtype": "tcp", 00:07:11.862 "traddr": "10.0.0.2", 00:07:11.862 "adrfam": "ipv4", 00:07:11.862 "trsvcid": "4420", 00:07:11.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:11.862 "hdgst": false, 00:07:11.862 "ddgst": false 00:07:11.862 }, 00:07:11.862 "method": "bdev_nvme_attach_controller" 00:07:11.862 }' 00:07:11.862 [2024-12-09 05:00:25.720891] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
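The gen_nvmf_target_json heredoc traced just above expands, one ${...} at a time, into the attach stanza that bdevperf reads via --json /dev/fd/63; the printf output shows the fully substituted result. For orientation only, the same attachment issued by hand against a running bdevperf RPC socket would look roughly like this (a sketch: the rpc.py flag spellings are an assumption, nothing in the trace shows them):

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
  # hdgst/ddgst are left at their false defaults, matching the generated JSON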
00:07:11.862 [2024-12-09 05:00:25.721018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343741 ] 00:07:12.122 [2024-12-09 05:00:25.879360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.122 [2024-12-09 05:00:26.005770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.693 Running I/O for 10 seconds... 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:12.693 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:12.956 
05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.956 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.956 [2024-12-09 05:00:26.939718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.956 [2024-12-09 05:00:26.939803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.956 [2024-12-09 05:00:26.939850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.956 [2024-12-09 05:00:26.939863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.956 [2024-12-09 05:00:26.939875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939889] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.956 [2024-12-09 05:00:26.939890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.956 [2024-12-09 05:00:26.939911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.956 [2024-12-09 05:00:26.939923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.956 [2024-12-09 05:00:26.939936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.939990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 
05:00:26.940073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.956 [2024-12-09 05:00:26.940238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 
05:00:26.940277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:12.957 [2024-12-09 05:00:26.940962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:12.957 [2024-12-09 05:00:26.941054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 
05:00:26.941323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.957 [2024-12-09 05:00:26.941503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.957 [2024-12-09 05:00:26.941517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941580] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.958 [2024-12-09 05:00:26.941850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.958 [2024-12-09 05:00:26.941861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~64 further alternating nvme_io_qpair_print_command READ / spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) notices for sqid:1 cid:32-63 (lba 69632-73600, len:128 each) elided; the completions are identical apart from cid and lba ...]
00:07:12.959 [2024-12-09 05:00:26.942695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000394700 is same with the state(6) to be set
00:07:12.959 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.959 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:12.959 [2024-12-09 05:00:26.944277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:12.959 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.959 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:12.959 task offset: 65536 on job bdev=Nvme0n1 fails
00:07:12.959
00:07:12.959 Latency(us)
00:07:12.959 [2024-12-09T04:00:26.956Z] Device Information          : runtime(s)  IOPS    MiB/s   Fail/s  TO/s    Average  min      max
00:07:12.959 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:12.959 Job: Nvme0n1 ended in about 0.44 seconds with error
00:07:12.959 Verification LBA range: start 0x0 length 0x400
00:07:12.959 Nvme0n1                     : 0.44       1175.69 73.48   146.96  0.00    46963.70 6062.08  38884.69
00:07:12.959 [2024-12-09T04:00:26.956Z] ===================================================================================================================
00:07:12.959 [2024-12-09T04:00:26.956Z] Total                       :            1175.69 73.48   146.96  0.00    46963.70 6062.08  38884.69
00:07:13.220 [2024-12-09 05:00:26.949146] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:13.220 [2024-12-09 05:00:26.949210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:07:13.220 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.220 05:00:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:07:13.220 [2024-12-09 05:00:27.005763] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
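The rpc_cmd at host_management.sh@85 above is the pivot of this test: the reads queued against cnode0 are aborted until the host NQN is authorized, after which the controller reset completes. rpc_cmd is a thin wrapper over SPDK's JSON-RPC client, so issued by hand the same step would be, roughly (a sketch; the target's default /var/tmp/spdk.sock RPC socket is assumed):

# Sketch: the same host-authorization call issued directly; -s selects the
# target's RPC socket (default path assumed here, not taken from this trace).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0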
00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1343741 00:07:14.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1343741) - No such process 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:14.160 { 00:07:14.160 "params": { 00:07:14.160 "name": "Nvme$subsystem", 00:07:14.160 "trtype": "$TEST_TRANSPORT", 00:07:14.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:14.160 "adrfam": "ipv4", 00:07:14.160 "trsvcid": "$NVMF_PORT", 00:07:14.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:14.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:14.160 "hdgst": ${hdgst:-false}, 00:07:14.160 "ddgst": ${ddgst:-false} 00:07:14.160 }, 00:07:14.160 "method": "bdev_nvme_attach_controller" 00:07:14.160 } 00:07:14.160 EOF 00:07:14.160 )") 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:14.160 05:00:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:14.160 "params": { 00:07:14.160 "name": "Nvme0", 00:07:14.160 "trtype": "tcp", 00:07:14.160 "traddr": "10.0.0.2", 00:07:14.160 "adrfam": "ipv4", 00:07:14.160 "trsvcid": "4420", 00:07:14.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:14.160 "hdgst": false, 00:07:14.160 "ddgst": false 00:07:14.160 }, 00:07:14.160 "method": "bdev_nvme_attach_controller" 00:07:14.160 }' 00:07:14.160 [2024-12-09 05:00:28.044264] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:14.160 [2024-12-09 05:00:28.044373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344256 ] 00:07:14.420 [2024-12-09 05:00:28.185327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.420 [2024-12-09 05:00:28.283773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.680 Running I/O for 1 seconds... 
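The second bdevperf pass above receives its bdev configuration on file descriptor 62: gen_nvmf_target_json expands the traced heredoc into the bdev_nvme_attach_controller entry printed in the log, and --json /dev/fd/62 consumes it. Reassembled as a standalone command it looks roughly like the sketch below; the outer subsystems/config wrapper is inferred from SPDK's JSON config layout, not printed verbatim in this run.

# Sketch: same flags as the traced run; the wrapper around the printed attach
# entry is an assumption. The heredoc is fed to bdevperf on fd 62.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 64 -o 65536 -w verify -t 1 --json /dev/fd/62 62<<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF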
00:07:16.081 1291.00 IOPS, 80.69 MiB/s 00:07:16.081 Latency(us) 00:07:16.081 [2024-12-09T04:00:30.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.081 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:16.081 Verification LBA range: start 0x0 length 0x400 00:07:16.081 Nvme0n1 : 1.01 1342.56 83.91 0.00 0.00 46649.60 2621.44 35826.35 00:07:16.081 [2024-12-09T04:00:30.078Z] =================================================================================================================== 00:07:16.081 [2024-12-09T04:00:30.078Z] Total : 1342.56 83.91 0.00 0.00 46649.60 2621.44 35826.35 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.341 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.341 rmmod nvme_tcp 00:07:16.341 rmmod nvme_fabrics 00:07:16.600 rmmod nvme_keyring 00:07:16.600 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.600 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:16.600 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:16.600 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1343518 ']' 00:07:16.600 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1343518 00:07:16.600 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1343518 ']' 00:07:16.600 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1343518 00:07:16.600 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:16.601 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.601 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1343518 00:07:16.601 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:16.601 05:00:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:16.601 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1343518' 00:07:16.601 killing process with pid 1343518 00:07:16.601 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1343518 00:07:16.601 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1343518 00:07:17.169 [2024-12-09 05:00:31.040754] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.169 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:19.709 00:07:19.709 real 0m16.504s 00:07:19.709 user 0m30.205s 00:07:19.709 sys 0m7.270s 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.709 ************************************ 00:07:19.709 END TEST nvmf_host_management 00:07:19.709 ************************************ 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.709 ************************************ 00:07:19.709 START TEST nvmf_lvol 00:07:19.709 ************************************ 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:19.709 * Looking for test storage... 00:07:19.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.709 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.710 --rc genhtml_branch_coverage=1 00:07:19.710 --rc genhtml_function_coverage=1 00:07:19.710 --rc genhtml_legend=1 00:07:19.710 --rc geninfo_all_blocks=1 00:07:19.710 --rc geninfo_unexecuted_blocks=1 00:07:19.710 00:07:19.710 ' 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.710 --rc genhtml_branch_coverage=1 00:07:19.710 --rc genhtml_function_coverage=1 00:07:19.710 --rc genhtml_legend=1 00:07:19.710 --rc geninfo_all_blocks=1 00:07:19.710 --rc geninfo_unexecuted_blocks=1 00:07:19.710 00:07:19.710 ' 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.710 --rc genhtml_branch_coverage=1 00:07:19.710 --rc genhtml_function_coverage=1 00:07:19.710 --rc genhtml_legend=1 00:07:19.710 --rc geninfo_all_blocks=1 00:07:19.710 --rc geninfo_unexecuted_blocks=1 00:07:19.710 00:07:19.710 ' 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.710 --rc genhtml_branch_coverage=1 00:07:19.710 --rc genhtml_function_coverage=1 00:07:19.710 --rc genhtml_legend=1 00:07:19.710 --rc geninfo_all_blocks=1 00:07:19.710 --rc geninfo_unexecuted_blocks=1 00:07:19.710 00:07:19.710 ' 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
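For reference, the lt/cmp_versions walk traced above (deciding that lcov 1.15 is older than 2, so the fallback LCOV_OPTS get exported) is a component-wise numeric compare. A minimal sketch of that logic follows; the harness's real function also splits on '-' and ':' and handles the other comparison operators.

# Sketch of the component-wise compare traced above: split on '.', treat
# missing components as 0, decide at the first differing component.
version_lt() {
    local -a v1 v2
    IFS=. read -r -a v1 <<< "$1"
    IFS=. read -r -a v2 <<< "$2"
    local i max=${#v1[@]}
    if (( ${#v2[@]} > max )); then max=${#v2[@]}; fi
    for (( i = 0; i < max; i++ )); do
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1    # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2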
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go prefixes repeated, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same prefix run, elided ...]:/var/lib/snapd/snap/bin
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same prefix run, elided ...]:/var/lib/snapd/snap/bin
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same prefix run, elided ...]:/var/lib/snapd/snap/bin
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:19.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- #
LVOL_BDEV_INIT_SIZE=20 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:19.710 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:27.848 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:27.848 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:27.848 05:00:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:27.848 Found net devices under 0000:31:00.0: cvl_0_0 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:27.848 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:27.849 Found net devices under 0000:31:00.1: cvl_0_1 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:27.849 05:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:27.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:07:27.849 00:07:27.849 --- 10.0.0.2 ping statistics --- 00:07:27.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.849 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:27.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:07:27.849 00:07:27.849 --- 10.0.0.1 ping statistics --- 00:07:27.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.849 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1349031 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1349031 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1349031 ']' 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.849 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.849 [2024-12-09 05:00:41.202999] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
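Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above builds the two-endpoint topology this test runs on: one port of the e810 pair is moved into a private network namespace to host the target at 10.0.0.2, its peer stays in the root namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened between them. A condensed sketch (commands as traced; the harness additionally tags the iptables rule with an SPDK_NVMF comment):

# Condensed from the nvmf_tcp_init trace above; cvl_0_0/cvl_0_1 are the two
# e810 ports detected earlier in the log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator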
00:07:27.849 [2024-12-09 05:00:41.203128] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.849 [2024-12-09 05:00:41.368263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.849 [2024-12-09 05:00:41.495956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.849 [2024-12-09 05:00:41.496024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.849 [2024-12-09 05:00:41.496037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.849 [2024-12-09 05:00:41.496050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.849 [2024-12-09 05:00:41.496064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.849 [2024-12-09 05:00:41.498811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.849 [2024-12-09 05:00:41.498942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.849 [2024-12-09 05:00:41.498950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.111 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.111 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:28.111 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:28.111 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.111 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.111 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.111 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:28.372 [2024-12-09 05:00:42.198863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.372 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:28.634 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:28.634 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:28.895 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:28.895 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:29.156 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:29.417 05:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3a3ad7b1-e8b1-469d-a34f-d1fd7ede7b1b 00:07:29.417 05:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3a3ad7b1-e8b1-469d-a34f-d1fd7ede7b1b lvol 20 00:07:29.417 05:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=11352940-3000-48e8-8769-b815b9e25c09 00:07:29.417 05:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:29.678 05:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 11352940-3000-48e8-8769-b815b9e25c09 00:07:29.959 05:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:29.959 [2024-12-09 05:00:43.944993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.220 05:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.220 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1349688 00:07:30.220 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:30.220 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:31.602 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 11352940-3000-48e8-8769-b815b9e25c09 MY_SNAPSHOT 00:07:31.602 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d7e0cdc8-db90-4609-904a-f941c8d18156 00:07:31.602 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 11352940-3000-48e8-8769-b815b9e25c09 30 00:07:31.860 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d7e0cdc8-db90-4609-904a-f941c8d18156 MY_CLONE 00:07:31.860 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e00a6719-060b-4a2a-b9fc-832ba19488a0 00:07:31.860 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e00a6719-060b-4a2a-b9fc-832ba19488a0 00:07:32.427 05:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1349688 00:07:40.567 Initializing NVMe Controllers 00:07:40.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:40.567 Controller IO queue size 128, less than required. 00:07:40.567 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:40.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:40.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:40.567 Initialization complete. Launching workers. 00:07:40.567 ======================================================== 00:07:40.567 Latency(us) 00:07:40.567 Device Information : IOPS MiB/s Average min max 00:07:40.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16199.20 63.28 7904.57 346.48 119321.35 00:07:40.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15210.70 59.42 8417.88 2406.05 100931.63 00:07:40.567 ======================================================== 00:07:40.567 Total : 31409.90 122.69 8153.15 346.48 119321.35 00:07:40.567 00:07:40.826 05:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.826 05:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 11352940-3000-48e8-8769-b815b9e25c09 00:07:41.086 05:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a3ad7b1-e8b1-469d-a34f-d1fd7ede7b1b 00:07:41.345 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.346 rmmod nvme_tcp 00:07:41.346 rmmod nvme_fabrics 00:07:41.346 rmmod nvme_keyring 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1349031 ']' 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1349031 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1349031 ']' 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1349031 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1349031 00:07:41.346 05:00:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1349031' 00:07:41.346 killing process with pid 1349031 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1349031 00:07:41.346 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1349031 00:07:42.286 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.286 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.286 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.286 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:42.286 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:42.286 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.286 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.286 05:00:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.286 05:00:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.286 05:00:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.286 05:00:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.286 05:00:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.199 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:44.199 00:07:44.199 real 0m24.795s 00:07:44.199 user 1m6.511s 00:07:44.199 sys 0m8.695s 00:07:44.199 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.199 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:44.199 ************************************ 00:07:44.199 END TEST nvmf_lvol 00:07:44.199 ************************************ 00:07:44.199 05:00:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:44.199 05:00:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:44.199 05:00:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.199 05:00:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.199 ************************************ 00:07:44.199 START TEST nvmf_lvs_grow 00:07:44.199 ************************************ 00:07:44.199 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:44.460 * Looking for test storage... 
00:07:44.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.460 --rc genhtml_branch_coverage=1 00:07:44.460 --rc genhtml_function_coverage=1 00:07:44.460 --rc genhtml_legend=1 00:07:44.460 --rc geninfo_all_blocks=1 00:07:44.460 --rc geninfo_unexecuted_blocks=1 00:07:44.460 00:07:44.460 ' 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.460 --rc genhtml_branch_coverage=1 00:07:44.460 --rc genhtml_function_coverage=1 00:07:44.460 --rc genhtml_legend=1 00:07:44.460 --rc geninfo_all_blocks=1 00:07:44.460 --rc geninfo_unexecuted_blocks=1 00:07:44.460 00:07:44.460 ' 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.460 --rc genhtml_branch_coverage=1 00:07:44.460 --rc genhtml_function_coverage=1 00:07:44.460 --rc genhtml_legend=1 00:07:44.460 --rc geninfo_all_blocks=1 00:07:44.460 --rc geninfo_unexecuted_blocks=1 00:07:44.460 00:07:44.460 ' 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.460 --rc genhtml_branch_coverage=1 00:07:44.460 --rc genhtml_function_coverage=1 00:07:44.460 --rc genhtml_legend=1 00:07:44.460 --rc geninfo_all_blocks=1 00:07:44.460 --rc geninfo_unexecuted_blocks=1 00:07:44.460 00:07:44.460 ' 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:44.460 05:00:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.460 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.461 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:52.631 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:52.631 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.631 05:01:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:52.631 Found net devices under 0000:31:00.0: cvl_0_0 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:52.631 Found net devices under 0000:31:00.1: cvl_0_1 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:07:52.631 00:07:52.631 --- 10.0.0.2 ping statistics --- 00:07:52.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.631 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:07:52.631 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:07:52.632 00:07:52.632 --- 10.0.0.1 ping statistics --- 00:07:52.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.632 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:52.632 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.632 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:52.632 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.632 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.632 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.632 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.632 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.632 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.632 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1356410 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1356410 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1356410 ']' 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.632 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.632 [2024-12-09 05:01:06.125147] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
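The ping exchange above is the tail end of nvmf_tcp_init, which builds a single-host loopback topology: the two ports of the E810 NIC (cvl_0_0 and cvl_0_1, presumably cabled back-to-back on this phy rig) are split across network namespaces so target and initiator get independent IP stacks. A sketch reconstructed from the logged commands, not part of the captured output:

    # Topology condensed from nvmf_tcp_init (nvmf/common.sh@250-291) as logged above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The nvmf_tgt process itself is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1, visible just below), which is why target-side RPCs in this log still reach it over the shared /var/tmp/spdk.sock UNIX socket while the data path crosses the two NIC ports.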
00:07:52.632 [2024-12-09 05:01:06.125267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.632 [2024-12-09 05:01:06.288438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.632 [2024-12-09 05:01:06.410086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.632 [2024-12-09 05:01:06.410155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.632 [2024-12-09 05:01:06.410169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.632 [2024-12-09 05:01:06.410182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.632 [2024-12-09 05:01:06.410195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.632 [2024-12-09 05:01:06.411672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.201 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.201 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:53.201 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.201 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.201 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.201 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.201 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.201 [2024-12-09 05:01:07.128310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.201 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:53.201 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.201 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.201 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.461 ************************************ 00:07:53.461 START TEST lvs_grow_clean 00:07:53.461 ************************************ 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:53.461 05:01:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:53.461 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:53.721 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=40db787e-ccee-464d-9282-697d7edcaeac 00:07:53.721 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40db787e-ccee-464d-9282-697d7edcaeac 00:07:53.721 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:53.981 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:53.981 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:53.981 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 40db787e-ccee-464d-9282-697d7edcaeac lvol 150 00:07:54.253 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d477c1c9-bf3b-4cac-8f76-5a29269292f5 00:07:54.253 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.253 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:54.253 [2024-12-09 05:01:08.186304] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:54.253 [2024-12-09 05:01:08.186417] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:54.253 true 00:07:54.253 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
40db787e-ccee-464d-9282-697d7edcaeac 00:07:54.253 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:54.569 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:54.569 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:54.886 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d477c1c9-bf3b-4cac-8f76-5a29269292f5 00:07:54.886 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:55.168 [2024-12-09 05:01:08.912793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.168 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1356991 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1356991 /var/tmp/bdevperf.sock 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1356991 ']' 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:55.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.168 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:55.450 [2024-12-09 05:01:09.205324] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
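At this point the lvs_grow_clean setup is complete. Condensed, with the cluster arithmetic made explicit (UUIDs and sizes copied from the log; a sketch, not captured output): a 200 MiB backing file at a 4 MiB cluster size gives 50 clusters, of which the logged lvstore reports 49 as data clusters, the remainder going to metadata; growing the file to 400 MiB and rescanning does not change that count until bdev_lvol_grow_lvstore runs later in the test, after which the log shows 99.

    # Setup condensed from nvmf_lvs_grow.sh@23-43 as logged above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$AIO"                                    # 200 MiB = 50 x 4 MiB clusters
    $RPC bdev_aio_create "$AIO" aio_bdev 4096                  # AIO bdev with 4 KiB blocks
    $RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
         --md-pages-per-cluster-ratio 300 aio_bdev lvs         # logged: 49 total_data_clusters
    $RPC bdev_lvol_create -u 40db787e-ccee-464d-9282-697d7edcaeac lvol 150   # 150 MiB lvol = 38 clusters
    truncate -s 400M "$AIO"                                    # grow the backing file...
    $RPC bdev_aio_rescan aio_bdev                              # ...still 49 clusters until grow_lvstore
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d477c1c9-bf3b-4cac-8f76-5a29269292f5
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf, started next in the log (core mask 0x2, 4 KiB random writes, queue depth 128, 10 seconds), then attaches to this subsystem over TCP and drives I/O while the lvstore is grown underneath it.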
00:07:55.450 [2024-12-09 05:01:09.205454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356991 ] 00:07:55.450 [2024-12-09 05:01:09.328409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.450 [2024-12-09 05:01:09.430038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.109 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.109 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:56.109 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:56.454 Nvme0n1 00:07:56.454 05:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:56.715 [ 00:07:56.715 { 00:07:56.715 "name": "Nvme0n1", 00:07:56.715 "aliases": [ 00:07:56.715 "d477c1c9-bf3b-4cac-8f76-5a29269292f5" 00:07:56.715 ], 00:07:56.715 "product_name": "NVMe disk", 00:07:56.715 "block_size": 4096, 00:07:56.715 "num_blocks": 38912, 00:07:56.715 "uuid": "d477c1c9-bf3b-4cac-8f76-5a29269292f5", 00:07:56.715 "numa_id": 0, 00:07:56.715 "assigned_rate_limits": { 00:07:56.715 "rw_ios_per_sec": 0, 00:07:56.715 "rw_mbytes_per_sec": 0, 00:07:56.715 "r_mbytes_per_sec": 0, 00:07:56.715 "w_mbytes_per_sec": 0 00:07:56.715 }, 00:07:56.715 "claimed": false, 00:07:56.715 "zoned": false, 00:07:56.715 "supported_io_types": { 00:07:56.715 "read": true, 00:07:56.715 "write": true, 00:07:56.715 "unmap": true, 00:07:56.715 "flush": true, 00:07:56.715 "reset": true, 00:07:56.715 "nvme_admin": true, 00:07:56.715 "nvme_io": true, 00:07:56.715 "nvme_io_md": false, 00:07:56.715 "write_zeroes": true, 00:07:56.715 "zcopy": false, 00:07:56.715 "get_zone_info": false, 00:07:56.715 "zone_management": false, 00:07:56.715 "zone_append": false, 00:07:56.715 "compare": true, 00:07:56.715 "compare_and_write": true, 00:07:56.715 "abort": true, 00:07:56.715 "seek_hole": false, 00:07:56.715 "seek_data": false, 00:07:56.715 "copy": true, 00:07:56.715 "nvme_iov_md": false 00:07:56.715 }, 00:07:56.715 "memory_domains": [ 00:07:56.715 { 00:07:56.715 "dma_device_id": "system", 00:07:56.715 "dma_device_type": 1 00:07:56.715 } 00:07:56.715 ], 00:07:56.715 "driver_specific": { 00:07:56.715 "nvme": [ 00:07:56.715 { 00:07:56.715 "trid": { 00:07:56.715 "trtype": "TCP", 00:07:56.715 "adrfam": "IPv4", 00:07:56.715 "traddr": "10.0.0.2", 00:07:56.715 "trsvcid": "4420", 00:07:56.715 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:56.715 }, 00:07:56.715 "ctrlr_data": { 00:07:56.715 "cntlid": 1, 00:07:56.715 "vendor_id": "0x8086", 00:07:56.715 "model_number": "SPDK bdev Controller", 00:07:56.715 "serial_number": "SPDK0", 00:07:56.715 "firmware_revision": "25.01", 00:07:56.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:56.715 "oacs": { 00:07:56.715 "security": 0, 00:07:56.715 "format": 0, 00:07:56.715 "firmware": 0, 00:07:56.715 "ns_manage": 0 00:07:56.715 }, 00:07:56.715 "multi_ctrlr": true, 00:07:56.715 
"ana_reporting": false 00:07:56.715 }, 00:07:56.715 "vs": { 00:07:56.715 "nvme_version": "1.3" 00:07:56.715 }, 00:07:56.715 "ns_data": { 00:07:56.715 "id": 1, 00:07:56.715 "can_share": true 00:07:56.715 } 00:07:56.715 } 00:07:56.715 ], 00:07:56.715 "mp_policy": "active_passive" 00:07:56.715 } 00:07:56.715 } 00:07:56.715 ] 00:07:56.715 05:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1357185 00:07:56.715 05:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:56.715 05:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:56.715 Running I/O for 10 seconds... 00:07:57.655 Latency(us) 00:07:57.655 [2024-12-09T04:01:11.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.655 Nvme0n1 : 1.00 21244.00 82.98 0.00 0.00 0.00 0.00 0.00 00:07:57.655 [2024-12-09T04:01:11.652Z] =================================================================================================================== 00:07:57.655 [2024-12-09T04:01:11.652Z] Total : 21244.00 82.98 0.00 0.00 0.00 0.00 0.00 00:07:57.655 00:07:58.607 05:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 40db787e-ccee-464d-9282-697d7edcaeac 00:07:58.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.607 Nvme0n1 : 2.00 22108.50 86.36 0.00 0.00 0.00 0.00 0.00 00:07:58.607 [2024-12-09T04:01:12.604Z] =================================================================================================================== 00:07:58.607 [2024-12-09T04:01:12.604Z] Total : 22108.50 86.36 0.00 0.00 0.00 0.00 0.00 00:07:58.607 00:07:58.867 true 00:07:58.867 05:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40db787e-ccee-464d-9282-697d7edcaeac 00:07:58.867 05:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:58.867 05:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:58.867 05:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:58.867 05:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1357185 00:07:59.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.832 Nvme0n1 : 3.00 22419.00 87.57 0.00 0.00 0.00 0.00 0.00 00:07:59.832 [2024-12-09T04:01:13.829Z] =================================================================================================================== 00:07:59.832 [2024-12-09T04:01:13.829Z] Total : 22419.00 87.57 0.00 0.00 0.00 0.00 0.00 00:07:59.832 00:08:00.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.769 Nvme0n1 : 4.00 22588.00 88.23 0.00 0.00 0.00 0.00 0.00 00:08:00.769 [2024-12-09T04:01:14.766Z] 
=================================================================================================================== 00:08:00.769 [2024-12-09T04:01:14.766Z] Total : 22588.00 88.23 0.00 0.00 0.00 0.00 0.00 00:08:00.769 00:08:01.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.706 Nvme0n1 : 5.00 22703.80 88.69 0.00 0.00 0.00 0.00 0.00 00:08:01.706 [2024-12-09T04:01:15.703Z] =================================================================================================================== 00:08:01.707 [2024-12-09T04:01:15.704Z] Total : 22703.80 88.69 0.00 0.00 0.00 0.00 0.00 00:08:01.707 00:08:02.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.647 Nvme0n1 : 6.00 22780.50 88.99 0.00 0.00 0.00 0.00 0.00 00:08:02.647 [2024-12-09T04:01:16.644Z] =================================================================================================================== 00:08:02.647 [2024-12-09T04:01:16.644Z] Total : 22780.50 88.99 0.00 0.00 0.00 0.00 0.00 00:08:02.647 00:08:04.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.027 Nvme0n1 : 7.00 22835.86 89.20 0.00 0.00 0.00 0.00 0.00 00:08:04.027 [2024-12-09T04:01:18.024Z] =================================================================================================================== 00:08:04.027 [2024-12-09T04:01:18.024Z] Total : 22835.86 89.20 0.00 0.00 0.00 0.00 0.00 00:08:04.027 00:08:04.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.979 Nvme0n1 : 8.00 22881.50 89.38 0.00 0.00 0.00 0.00 0.00 00:08:04.979 [2024-12-09T04:01:18.976Z] =================================================================================================================== 00:08:04.979 [2024-12-09T04:01:18.976Z] Total : 22881.50 89.38 0.00 0.00 0.00 0.00 0.00 00:08:04.979 00:08:05.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.918 Nvme0n1 : 9.00 22916.78 89.52 0.00 0.00 0.00 0.00 0.00 00:08:05.918 [2024-12-09T04:01:19.915Z] =================================================================================================================== 00:08:05.918 [2024-12-09T04:01:19.915Z] Total : 22916.78 89.52 0.00 0.00 0.00 0.00 0.00 00:08:05.918 00:08:06.857 00:08:06.857 Latency(us) 00:08:06.857 [2024-12-09T04:01:20.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.857 Nvme0n1 : 10.00 22935.92 89.59 0.00 0.00 5577.76 3358.72 15291.73 00:08:06.857 [2024-12-09T04:01:20.854Z] =================================================================================================================== 00:08:06.857 [2024-12-09T04:01:20.854Z] Total : 22935.92 89.59 0.00 0.00 5577.76 3358.72 15291.73 00:08:06.857 { 00:08:06.857 "results": [ 00:08:06.857 { 00:08:06.857 "job": "Nvme0n1", 00:08:06.857 "core_mask": "0x2", 00:08:06.857 "workload": "randwrite", 00:08:06.857 "status": "finished", 00:08:06.857 "queue_depth": 128, 00:08:06.857 "io_size": 4096, 00:08:06.857 "runtime": 10.00178, 00:08:06.857 "iops": 22935.917406701607, 00:08:06.857 "mibps": 89.59342736992815, 00:08:06.857 "io_failed": 0, 00:08:06.857 "io_timeout": 0, 00:08:06.857 "avg_latency_us": 5577.7580266201685, 00:08:06.857 "min_latency_us": 3358.72, 00:08:06.858 "max_latency_us": 15291.733333333334 00:08:06.858 } 00:08:06.858 ], 00:08:06.858 "core_count": 1 00:08:06.858 } 00:08:06.858 05:01:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1356991 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1356991 ']' 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1356991 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1356991 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1356991' 00:08:06.858 killing process with pid 1356991 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1356991 00:08:06.858 Received shutdown signal, test time was about 10.000000 seconds 00:08:06.858 00:08:06.858 Latency(us) 00:08:06.858 [2024-12-09T04:01:20.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.858 [2024-12-09T04:01:20.855Z] =================================================================================================================== 00:08:06.858 [2024-12-09T04:01:20.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:06.858 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1356991 00:08:07.428 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.428 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.688 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40db787e-ccee-464d-9282-697d7edcaeac 00:08:07.688 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:07.948 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:07.948 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:07.948 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.948 [2024-12-09 05:01:21.858587] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:07.948 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40db787e-ccee-464d-9282-697d7edcaeac 00:08:07.948 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:07.948 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40db787e-ccee-464d-9282-697d7edcaeac 00:08:07.948 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.948 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.949 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.949 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.949 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.949 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.949 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.949 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:07.949 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40db787e-ccee-464d-9282-697d7edcaeac 00:08:08.209 request: 00:08:08.209 { 00:08:08.209 "uuid": "40db787e-ccee-464d-9282-697d7edcaeac", 00:08:08.209 "method": "bdev_lvol_get_lvstores", 00:08:08.209 "req_id": 1 00:08:08.209 } 00:08:08.209 Got JSON-RPC error response 00:08:08.209 response: 00:08:08.209 { 00:08:08.209 "code": -19, 00:08:08.209 "message": "No such device" 00:08:08.209 } 00:08:08.209 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:08.209 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.209 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.209 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.209 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.469 aio_bdev 00:08:08.469 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d477c1c9-bf3b-4cac-8f76-5a29269292f5 00:08:08.469 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@903 -- # local bdev_name=d477c1c9-bf3b-4cac-8f76-5a29269292f5 00:08:08.469 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.469 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:08.469 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.469 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.469 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.469 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d477c1c9-bf3b-4cac-8f76-5a29269292f5 -t 2000 00:08:08.729 [ 00:08:08.729 { 00:08:08.729 "name": "d477c1c9-bf3b-4cac-8f76-5a29269292f5", 00:08:08.729 "aliases": [ 00:08:08.729 "lvs/lvol" 00:08:08.729 ], 00:08:08.729 "product_name": "Logical Volume", 00:08:08.729 "block_size": 4096, 00:08:08.729 "num_blocks": 38912, 00:08:08.729 "uuid": "d477c1c9-bf3b-4cac-8f76-5a29269292f5", 00:08:08.729 "assigned_rate_limits": { 00:08:08.729 "rw_ios_per_sec": 0, 00:08:08.729 "rw_mbytes_per_sec": 0, 00:08:08.729 "r_mbytes_per_sec": 0, 00:08:08.729 "w_mbytes_per_sec": 0 00:08:08.729 }, 00:08:08.729 "claimed": false, 00:08:08.729 "zoned": false, 00:08:08.729 "supported_io_types": { 00:08:08.729 "read": true, 00:08:08.729 "write": true, 00:08:08.729 "unmap": true, 00:08:08.729 "flush": false, 00:08:08.729 "reset": true, 00:08:08.729 "nvme_admin": false, 00:08:08.729 "nvme_io": false, 00:08:08.729 "nvme_io_md": false, 00:08:08.729 "write_zeroes": true, 00:08:08.729 "zcopy": false, 00:08:08.729 "get_zone_info": false, 00:08:08.729 "zone_management": false, 00:08:08.729 "zone_append": false, 00:08:08.729 "compare": false, 00:08:08.729 "compare_and_write": false, 00:08:08.729 "abort": false, 00:08:08.729 "seek_hole": true, 00:08:08.729 "seek_data": true, 00:08:08.729 "copy": false, 00:08:08.729 "nvme_iov_md": false 00:08:08.729 }, 00:08:08.729 "driver_specific": { 00:08:08.729 "lvol": { 00:08:08.729 "lvol_store_uuid": "40db787e-ccee-464d-9282-697d7edcaeac", 00:08:08.729 "base_bdev": "aio_bdev", 00:08:08.729 "thin_provision": false, 00:08:08.729 "num_allocated_clusters": 38, 00:08:08.729 "snapshot": false, 00:08:08.729 "clone": false, 00:08:08.729 "esnap_clone": false 00:08:08.729 } 00:08:08.729 } 00:08:08.729 } 00:08:08.729 ] 00:08:08.729 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:08.729 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40db787e-ccee-464d-9282-697d7edcaeac 00:08:08.729 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:08.989 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:08.989 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 40db787e-ccee-464d-9282-697d7edcaeac 00:08:08.989 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:08.989 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:08.989 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d477c1c9-bf3b-4cac-8f76-5a29269292f5 00:08:09.250 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40db787e-ccee-464d-9282-697d7edcaeac 00:08:09.510 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.510 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.510 00:08:09.510 real 0m16.220s 00:08:09.510 user 0m15.859s 00:08:09.510 sys 0m1.497s 00:08:09.510 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.510 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:09.510 ************************************ 00:08:09.510 END TEST lvs_grow_clean 00:08:09.510 ************************************ 00:08:09.510 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:09.510 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:09.510 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.510 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.771 ************************************ 00:08:09.771 START TEST lvs_grow_dirty 00:08:09.771 ************************************ 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.771 05:01:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:09.771 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:10.033 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:10.033 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:10.033 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:10.293 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:10.293 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:10.294 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 lvol 150 00:08:10.294 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6645bcc5-4de9-45ae-afb8-e6e02c5506ee 00:08:10.294 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.294 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:10.554 [2024-12-09 05:01:24.366532] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:10.554 [2024-12-09 05:01:24.366595] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:10.554 true 00:08:10.554 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:10.554 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:10.554 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:10.554 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:10.814 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6645bcc5-4de9-45ae-afb8-e6e02c5506ee 00:08:11.073 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:11.073 [2024-12-09 05:01:24.996497] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.073 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1360252 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1360252 /var/tmp/bdevperf.sock 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1360252 ']' 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:11.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.333 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.333 [2024-12-09 05:01:25.248204] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
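The three RPCs above publish the lvol over NVMe/TCP; the bdevperf run that follows exercises it from the initiator side. Condensed from the xtrace in this log (the socket path, core mask, and 10.0.0.2:4420 target address are specific to this job, so treat this as a sketch of this run's flow rather than a general recipe), the sequence is:

  # Start bdevperf idle (-z) on core mask 0x2: 4 KiB random writes, qd 128, 10 s run, 1 s stats interval.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # Attach the exported namespace as local bdev Nvme0 over NVMe/TCP.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

  # Release the idle workload; bdevperf then reports the per-second IOPS seen below.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests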
00:08:11.333 [2024-12-09 05:01:25.248311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360252 ] 00:08:11.593 [2024-12-09 05:01:25.379344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.593 [2024-12-09 05:01:25.452773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.164 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.164 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:12.164 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:12.424 Nvme0n1 00:08:12.424 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:12.683 [ 00:08:12.683 { 00:08:12.683 "name": "Nvme0n1", 00:08:12.683 "aliases": [ 00:08:12.683 "6645bcc5-4de9-45ae-afb8-e6e02c5506ee" 00:08:12.683 ], 00:08:12.683 "product_name": "NVMe disk", 00:08:12.683 "block_size": 4096, 00:08:12.683 "num_blocks": 38912, 00:08:12.683 "uuid": "6645bcc5-4de9-45ae-afb8-e6e02c5506ee", 00:08:12.683 "numa_id": 0, 00:08:12.683 "assigned_rate_limits": { 00:08:12.683 "rw_ios_per_sec": 0, 00:08:12.683 "rw_mbytes_per_sec": 0, 00:08:12.683 "r_mbytes_per_sec": 0, 00:08:12.683 "w_mbytes_per_sec": 0 00:08:12.683 }, 00:08:12.683 "claimed": false, 00:08:12.683 "zoned": false, 00:08:12.683 "supported_io_types": { 00:08:12.683 "read": true, 00:08:12.683 "write": true, 00:08:12.683 "unmap": true, 00:08:12.683 "flush": true, 00:08:12.683 "reset": true, 00:08:12.683 "nvme_admin": true, 00:08:12.683 "nvme_io": true, 00:08:12.683 "nvme_io_md": false, 00:08:12.683 "write_zeroes": true, 00:08:12.683 "zcopy": false, 00:08:12.683 "get_zone_info": false, 00:08:12.683 "zone_management": false, 00:08:12.683 "zone_append": false, 00:08:12.683 "compare": true, 00:08:12.683 "compare_and_write": true, 00:08:12.684 "abort": true, 00:08:12.684 "seek_hole": false, 00:08:12.684 "seek_data": false, 00:08:12.684 "copy": true, 00:08:12.684 "nvme_iov_md": false 00:08:12.684 }, 00:08:12.684 "memory_domains": [ 00:08:12.684 { 00:08:12.684 "dma_device_id": "system", 00:08:12.684 "dma_device_type": 1 00:08:12.684 } 00:08:12.684 ], 00:08:12.684 "driver_specific": { 00:08:12.684 "nvme": [ 00:08:12.684 { 00:08:12.684 "trid": { 00:08:12.684 "trtype": "TCP", 00:08:12.684 "adrfam": "IPv4", 00:08:12.684 "traddr": "10.0.0.2", 00:08:12.684 "trsvcid": "4420", 00:08:12.684 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:12.684 }, 00:08:12.684 "ctrlr_data": { 00:08:12.684 "cntlid": 1, 00:08:12.684 "vendor_id": "0x8086", 00:08:12.684 "model_number": "SPDK bdev Controller", 00:08:12.684 "serial_number": "SPDK0", 00:08:12.684 "firmware_revision": "25.01", 00:08:12.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:12.684 "oacs": { 00:08:12.684 "security": 0, 00:08:12.684 "format": 0, 00:08:12.684 "firmware": 0, 00:08:12.684 "ns_manage": 0 00:08:12.684 }, 00:08:12.684 "multi_ctrlr": true, 00:08:12.684 
"ana_reporting": false 00:08:12.684 }, 00:08:12.684 "vs": { 00:08:12.684 "nvme_version": "1.3" 00:08:12.684 }, 00:08:12.684 "ns_data": { 00:08:12.684 "id": 1, 00:08:12.684 "can_share": true 00:08:12.684 } 00:08:12.684 } 00:08:12.684 ], 00:08:12.684 "mp_policy": "active_passive" 00:08:12.684 } 00:08:12.684 } 00:08:12.684 ] 00:08:12.684 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1360588 00:08:12.684 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:12.684 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:12.684 Running I/O for 10 seconds... 00:08:14.070 Latency(us) 00:08:14.070 [2024-12-09T04:01:28.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.070 Nvme0n1 : 1.00 21397.00 83.58 0.00 0.00 0.00 0.00 0.00 00:08:14.070 [2024-12-09T04:01:28.067Z] =================================================================================================================== 00:08:14.070 [2024-12-09T04:01:28.067Z] Total : 21397.00 83.58 0.00 0.00 0.00 0.00 0.00 00:08:14.070 00:08:14.638 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:14.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.898 Nvme0n1 : 2.00 21490.50 83.95 0.00 0.00 0.00 0.00 0.00 00:08:14.898 [2024-12-09T04:01:28.895Z] =================================================================================================================== 00:08:14.898 [2024-12-09T04:01:28.895Z] Total : 21490.50 83.95 0.00 0.00 0.00 0.00 0.00 00:08:14.898 00:08:14.898 true 00:08:14.898 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:14.898 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:15.158 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:15.158 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:15.158 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1360588 00:08:15.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.727 Nvme0n1 : 3.00 21508.33 84.02 0.00 0.00 0.00 0.00 0.00 00:08:15.727 [2024-12-09T04:01:29.724Z] =================================================================================================================== 00:08:15.727 [2024-12-09T04:01:29.724Z] Total : 21508.33 84.02 0.00 0.00 0.00 0.00 0.00 00:08:15.727 00:08:16.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.682 Nvme0n1 : 4.00 21543.25 84.15 0.00 0.00 0.00 0.00 0.00 00:08:16.682 [2024-12-09T04:01:30.679Z] 
=================================================================================================================== 00:08:16.682 [2024-12-09T04:01:30.679Z] Total : 21543.25 84.15 0.00 0.00 0.00 0.00 0.00 00:08:16.682 00:08:18.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.064 Nvme0n1 : 5.00 21583.40 84.31 0.00 0.00 0.00 0.00 0.00 00:08:18.064 [2024-12-09T04:01:32.061Z] =================================================================================================================== 00:08:18.064 [2024-12-09T04:01:32.061Z] Total : 21583.40 84.31 0.00 0.00 0.00 0.00 0.00 00:08:18.064 00:08:19.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.001 Nvme0n1 : 6.00 21622.17 84.46 0.00 0.00 0.00 0.00 0.00 00:08:19.001 [2024-12-09T04:01:32.998Z] =================================================================================================================== 00:08:19.001 [2024-12-09T04:01:32.998Z] Total : 21622.17 84.46 0.00 0.00 0.00 0.00 0.00 00:08:19.001 00:08:19.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.940 Nvme0n1 : 7.00 21649.86 84.57 0.00 0.00 0.00 0.00 0.00 00:08:19.940 [2024-12-09T04:01:33.937Z] =================================================================================================================== 00:08:19.940 [2024-12-09T04:01:33.937Z] Total : 21649.86 84.57 0.00 0.00 0.00 0.00 0.00 00:08:19.940 00:08:20.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.880 Nvme0n1 : 8.00 21674.62 84.67 0.00 0.00 0.00 0.00 0.00 00:08:20.880 [2024-12-09T04:01:34.877Z] =================================================================================================================== 00:08:20.880 [2024-12-09T04:01:34.877Z] Total : 21674.62 84.67 0.00 0.00 0.00 0.00 0.00 00:08:20.880 00:08:21.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.820 Nvme0n1 : 9.00 21695.67 84.75 0.00 0.00 0.00 0.00 0.00 00:08:21.820 [2024-12-09T04:01:35.817Z] =================================================================================================================== 00:08:21.820 [2024-12-09T04:01:35.817Z] Total : 21695.67 84.75 0.00 0.00 0.00 0.00 0.00 00:08:21.820 00:08:22.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.761 Nvme0n1 : 10.00 21710.90 84.81 0.00 0.00 0.00 0.00 0.00 00:08:22.761 [2024-12-09T04:01:36.758Z] =================================================================================================================== 00:08:22.761 [2024-12-09T04:01:36.758Z] Total : 21710.90 84.81 0.00 0.00 0.00 0.00 0.00 00:08:22.761 00:08:22.761 00:08:22.761 Latency(us) 00:08:22.761 [2024-12-09T04:01:36.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.761 Nvme0n1 : 10.01 21711.16 84.81 0.00 0.00 5891.37 2689.71 8847.36 00:08:22.761 [2024-12-09T04:01:36.758Z] =================================================================================================================== 00:08:22.761 [2024-12-09T04:01:36.758Z] Total : 21711.16 84.81 0.00 0.00 5891.37 2689.71 8847.36 00:08:22.761 { 00:08:22.761 "results": [ 00:08:22.761 { 00:08:22.761 "job": "Nvme0n1", 00:08:22.761 "core_mask": "0x2", 00:08:22.761 "workload": "randwrite", 00:08:22.761 "status": "finished", 00:08:22.761 "queue_depth": 128, 00:08:22.761 "io_size": 4096, 00:08:22.761 
"runtime": 10.005774, 00:08:22.761 "iops": 21711.16397392146, 00:08:22.761 "mibps": 84.8092342731307, 00:08:22.761 "io_failed": 0, 00:08:22.761 "io_timeout": 0, 00:08:22.761 "avg_latency_us": 5891.365378825891, 00:08:22.761 "min_latency_us": 2689.7066666666665, 00:08:22.761 "max_latency_us": 8847.36 00:08:22.761 } 00:08:22.761 ], 00:08:22.761 "core_count": 1 00:08:22.761 } 00:08:22.761 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1360252 00:08:22.761 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1360252 ']' 00:08:22.761 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1360252 00:08:22.761 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:22.761 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.761 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1360252 00:08:23.022 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:23.022 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:23.022 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1360252' 00:08:23.022 killing process with pid 1360252 00:08:23.022 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1360252 00:08:23.022 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.022 00:08:23.022 Latency(us) 00:08:23.022 [2024-12-09T04:01:37.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.022 [2024-12-09T04:01:37.019Z] =================================================================================================================== 00:08:23.022 [2024-12-09T04:01:37.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.022 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1360252 00:08:23.281 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.541 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:23.801 05:01:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1356410 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1356410 00:08:23.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1356410 Killed "${NVMF_APP[@]}" "$@" 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1362635 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1362635 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1362635 ']' 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.801 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.061 [2024-12-09 05:01:37.854241] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:24.061 [2024-12-09 05:01:37.854355] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.061 [2024-12-09 05:01:38.012254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.321 [2024-12-09 05:01:38.095750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.321 [2024-12-09 05:01:38.095789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.322 [2024-12-09 05:01:38.095798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.322 [2024-12-09 05:01:38.095806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:24.322 [2024-12-09 05:01:38.095814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.322 [2024-12-09 05:01:38.096722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.891 [2024-12-09 05:01:38.805790] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:24.891 [2024-12-09 05:01:38.805920] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:24.891 [2024-12-09 05:01:38.805953] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6645bcc5-4de9-45ae-afb8-e6e02c5506ee 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6645bcc5-4de9-45ae-afb8-e6e02c5506ee 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.891 05:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.150 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6645bcc5-4de9-45ae-afb8-e6e02c5506ee -t 2000 00:08:25.411 [ 00:08:25.411 { 00:08:25.411 "name": "6645bcc5-4de9-45ae-afb8-e6e02c5506ee", 00:08:25.411 "aliases": [ 00:08:25.411 "lvs/lvol" 00:08:25.411 ], 00:08:25.411 "product_name": "Logical Volume", 00:08:25.411 "block_size": 4096, 00:08:25.411 "num_blocks": 38912, 00:08:25.411 "uuid": "6645bcc5-4de9-45ae-afb8-e6e02c5506ee", 00:08:25.411 "assigned_rate_limits": { 00:08:25.411 "rw_ios_per_sec": 0, 00:08:25.411 "rw_mbytes_per_sec": 0, 
00:08:25.411 "r_mbytes_per_sec": 0, 00:08:25.411 "w_mbytes_per_sec": 0 00:08:25.411 }, 00:08:25.411 "claimed": false, 00:08:25.411 "zoned": false, 00:08:25.411 "supported_io_types": { 00:08:25.411 "read": true, 00:08:25.411 "write": true, 00:08:25.411 "unmap": true, 00:08:25.411 "flush": false, 00:08:25.411 "reset": true, 00:08:25.411 "nvme_admin": false, 00:08:25.411 "nvme_io": false, 00:08:25.411 "nvme_io_md": false, 00:08:25.411 "write_zeroes": true, 00:08:25.411 "zcopy": false, 00:08:25.411 "get_zone_info": false, 00:08:25.411 "zone_management": false, 00:08:25.411 "zone_append": false, 00:08:25.411 "compare": false, 00:08:25.411 "compare_and_write": false, 00:08:25.411 "abort": false, 00:08:25.411 "seek_hole": true, 00:08:25.411 "seek_data": true, 00:08:25.411 "copy": false, 00:08:25.411 "nvme_iov_md": false 00:08:25.411 }, 00:08:25.411 "driver_specific": { 00:08:25.411 "lvol": { 00:08:25.411 "lvol_store_uuid": "3fceec92-ad34-43d4-90b5-b3a22e43dc50", 00:08:25.411 "base_bdev": "aio_bdev", 00:08:25.411 "thin_provision": false, 00:08:25.411 "num_allocated_clusters": 38, 00:08:25.411 "snapshot": false, 00:08:25.411 "clone": false, 00:08:25.411 "esnap_clone": false 00:08:25.411 } 00:08:25.411 } 00:08:25.411 } 00:08:25.411 ] 00:08:25.411 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:25.411 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:25.411 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:25.411 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:25.411 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:25.411 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:25.708 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:25.708 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.708 [2024-12-09 05:01:39.694245] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:25.967 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:25.967 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:25.967 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:25.968 request: 00:08:25.968 { 00:08:25.968 "uuid": "3fceec92-ad34-43d4-90b5-b3a22e43dc50", 00:08:25.968 "method": "bdev_lvol_get_lvstores", 00:08:25.968 "req_id": 1 00:08:25.968 } 00:08:25.968 Got JSON-RPC error response 00:08:25.968 response: 00:08:25.968 { 00:08:25.968 "code": -19, 00:08:25.968 "message": "No such device" 00:08:25.968 } 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:25.968 05:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.227 aio_bdev 00:08:26.227 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6645bcc5-4de9-45ae-afb8-e6e02c5506ee 00:08:26.227 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6645bcc5-4de9-45ae-afb8-e6e02c5506ee 00:08:26.227 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.227 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:26.227 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.227 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.227 05:01:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:26.487 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6645bcc5-4de9-45ae-afb8-e6e02c5506ee -t 2000 00:08:26.487 [ 00:08:26.487 { 00:08:26.487 "name": "6645bcc5-4de9-45ae-afb8-e6e02c5506ee", 00:08:26.487 "aliases": [ 00:08:26.487 "lvs/lvol" 00:08:26.487 ], 00:08:26.487 "product_name": "Logical Volume", 00:08:26.487 "block_size": 4096, 00:08:26.487 "num_blocks": 38912, 00:08:26.487 "uuid": "6645bcc5-4de9-45ae-afb8-e6e02c5506ee", 00:08:26.487 "assigned_rate_limits": { 00:08:26.487 "rw_ios_per_sec": 0, 00:08:26.487 "rw_mbytes_per_sec": 0, 00:08:26.487 "r_mbytes_per_sec": 0, 00:08:26.487 "w_mbytes_per_sec": 0 00:08:26.487 }, 00:08:26.487 "claimed": false, 00:08:26.487 "zoned": false, 00:08:26.487 "supported_io_types": { 00:08:26.487 "read": true, 00:08:26.487 "write": true, 00:08:26.487 "unmap": true, 00:08:26.487 "flush": false, 00:08:26.487 "reset": true, 00:08:26.487 "nvme_admin": false, 00:08:26.487 "nvme_io": false, 00:08:26.487 "nvme_io_md": false, 00:08:26.487 "write_zeroes": true, 00:08:26.487 "zcopy": false, 00:08:26.487 "get_zone_info": false, 00:08:26.487 "zone_management": false, 00:08:26.487 "zone_append": false, 00:08:26.487 "compare": false, 00:08:26.487 "compare_and_write": false, 00:08:26.487 "abort": false, 00:08:26.487 "seek_hole": true, 00:08:26.487 "seek_data": true, 00:08:26.487 "copy": false, 00:08:26.487 "nvme_iov_md": false 00:08:26.487 }, 00:08:26.487 "driver_specific": { 00:08:26.487 "lvol": { 00:08:26.487 "lvol_store_uuid": "3fceec92-ad34-43d4-90b5-b3a22e43dc50", 00:08:26.487 "base_bdev": "aio_bdev", 00:08:26.487 "thin_provision": false, 00:08:26.487 "num_allocated_clusters": 38, 00:08:26.487 "snapshot": false, 00:08:26.487 "clone": false, 00:08:26.487 "esnap_clone": false 00:08:26.487 } 00:08:26.487 } 00:08:26.487 } 00:08:26.487 ] 00:08:26.487 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:26.487 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:26.487 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:26.746 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:26.747 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:26.747 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:27.006 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.006 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6645bcc5-4de9-45ae-afb8-e6e02c5506ee 00:08:27.006 05:01:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fceec92-ad34-43d4-90b5-b3a22e43dc50 00:08:27.266 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:27.525 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.525 00:08:27.525 real 0m17.805s 00:08:27.525 user 0m46.391s 00:08:27.525 sys 0m3.374s 00:08:27.525 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.525 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.525 ************************************ 00:08:27.525 END TEST lvs_grow_dirty 00:08:27.525 ************************************ 00:08:27.525 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:27.525 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:27.525 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:27.525 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:27.526 nvmf_trace.0 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.526 rmmod nvme_tcp 00:08:27.526 rmmod nvme_fabrics 00:08:27.526 rmmod nvme_keyring 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:27.526 
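The rmmod output above is the first half of nvmftestfini; the second half, visible below, stops the target app and unwinds this job's network state. Reconstructed from the xtrace (the PID and the cvl_0_1 interface are specific to this run, and the kill/wait bookkeeping is condensed):

  # Unload the host-side NVMe/TCP modules (nvme_fabrics and nvme_keyring come out with them, as the rmmod lines above show).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the nvmf_tgt app started for lvs_grow_dirty, then reap it.
  kill 1362635 && wait 1362635

  # Drop the SPDK_NVMF iptables rules and flush the test namespace's leftover addresses.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1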
05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1362635 ']' 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1362635 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1362635 ']' 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1362635 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.526 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1362635 00:08:27.786 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.786 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.786 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1362635' 00:08:27.786 killing process with pid 1362635 00:08:27.786 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1362635 00:08:27.786 05:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1362635 00:08:28.356 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.357 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.270 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.270 00:08:30.270 real 0m46.045s 00:08:30.270 user 1m9.072s 00:08:30.270 sys 0m11.197s 00:08:30.270 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.270 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.270 ************************************ 00:08:30.270 END TEST nvmf_lvs_grow 00:08:30.270 ************************************ 00:08:30.270 05:01:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:30.270 05:01:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.270 05:01:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.270 05:01:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.532 ************************************ 00:08:30.532 START TEST nvmf_bdev_io_wait 00:08:30.532 ************************************ 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:30.532 * Looking for test storage... 00:08:30.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.532 --rc genhtml_branch_coverage=1 00:08:30.532 --rc genhtml_function_coverage=1 00:08:30.532 --rc genhtml_legend=1 00:08:30.532 --rc geninfo_all_blocks=1 00:08:30.532 --rc geninfo_unexecuted_blocks=1 00:08:30.532 00:08:30.532 ' 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.532 --rc genhtml_branch_coverage=1 00:08:30.532 --rc genhtml_function_coverage=1 00:08:30.532 --rc genhtml_legend=1 00:08:30.532 --rc geninfo_all_blocks=1 00:08:30.532 --rc geninfo_unexecuted_blocks=1 00:08:30.532 00:08:30.532 ' 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.532 --rc genhtml_branch_coverage=1 00:08:30.532 --rc genhtml_function_coverage=1 00:08:30.532 --rc genhtml_legend=1 00:08:30.532 --rc geninfo_all_blocks=1 00:08:30.532 --rc geninfo_unexecuted_blocks=1 00:08:30.532 00:08:30.532 ' 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.532 --rc genhtml_branch_coverage=1 00:08:30.532 --rc genhtml_function_coverage=1 00:08:30.532 --rc genhtml_legend=1 00:08:30.532 --rc geninfo_all_blocks=1 00:08:30.532 --rc geninfo_unexecuted_blocks=1 00:08:30.532 00:08:30.532 ' 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.532 05:01:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.532 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.533 05:01:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.672 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:38.673 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:38.673 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.673 05:01:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:38.673 Found net devices under 0000:31:00.0: cvl_0_0 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:38.673 Found net devices under 0000:31:00.1: cvl_0_1 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:38.673 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:38.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:08:38.673 00:08:38.673 --- 10.0.0.2 ping statistics --- 00:08:38.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.673 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:08:38.673 00:08:38.673 --- 10.0.0.1 ping statistics --- 00:08:38.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.673 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1367854 00:08:38.673 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1367854 00:08:38.674 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:38.674 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1367854 ']' 00:08:38.674 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.674 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.674 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.674 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.674 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.674 [2024-12-09 05:01:52.235512] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
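The nvmftestinit sequence traced above builds a self-contained NVMe/TCP topology on one host: one port of the E810 pair (cvl_0_0, the target side) is moved into a private network namespace and addressed as 10.0.0.2, its sibling port (cvl_0_1, the initiator side) stays in the default namespace as 10.0.0.1, an iptables rule opens TCP port 4420 (tagged with an SPDK_NVMF comment so cleanup can find it later), and both directions are verified with ping. Collected from the trace, the setup amounts to:

ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The nvmf_tgt process whose startup banner appears here is itself launched under ip netns exec cvl_0_0_ns_spdk, which is why its 10.0.0.2:4420 listener is reachable from the default namespace only through cvl_0_1.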
00:08:38.674 [2024-12-09 05:01:52.235643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.674 [2024-12-09 05:01:52.404089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.674 [2024-12-09 05:01:52.533719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.674 [2024-12-09 05:01:52.533791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.674 [2024-12-09 05:01:52.533805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.674 [2024-12-09 05:01:52.533831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.674 [2024-12-09 05:01:52.533841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.674 [2024-12-09 05:01:52.536912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.674 [2024-12-09 05:01:52.536996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.674 [2024-12-09 05:01:52.537102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.674 [2024-12-09 05:01:52.537128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.246 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:39.507 [2024-12-09 05:01:53.267003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 Malloc0 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 [2024-12-09 05:01:53.374834] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1368095 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1368097 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.507 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.507 { 00:08:39.507 "params": { 
00:08:39.507 "name": "Nvme$subsystem", 00:08:39.507 "trtype": "$TEST_TRANSPORT", 00:08:39.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.507 "adrfam": "ipv4", 00:08:39.507 "trsvcid": "$NVMF_PORT", 00:08:39.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.507 "hdgst": ${hdgst:-false}, 00:08:39.507 "ddgst": ${ddgst:-false} 00:08:39.507 }, 00:08:39.507 "method": "bdev_nvme_attach_controller" 00:08:39.507 } 00:08:39.507 EOF 00:08:39.507 )") 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1368099 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.508 { 00:08:39.508 "params": { 00:08:39.508 "name": "Nvme$subsystem", 00:08:39.508 "trtype": "$TEST_TRANSPORT", 00:08:39.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.508 "adrfam": "ipv4", 00:08:39.508 "trsvcid": "$NVMF_PORT", 00:08:39.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.508 "hdgst": ${hdgst:-false}, 00:08:39.508 "ddgst": ${ddgst:-false} 00:08:39.508 }, 00:08:39.508 "method": "bdev_nvme_attach_controller" 00:08:39.508 } 00:08:39.508 EOF 00:08:39.508 )") 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1368102 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.508 { 00:08:39.508 "params": { 00:08:39.508 "name": "Nvme$subsystem", 00:08:39.508 "trtype": "$TEST_TRANSPORT", 00:08:39.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.508 "adrfam": "ipv4", 00:08:39.508 "trsvcid": "$NVMF_PORT", 00:08:39.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.508 "hdgst": ${hdgst:-false}, 
00:08:39.508 "ddgst": ${ddgst:-false} 00:08:39.508 }, 00:08:39.508 "method": "bdev_nvme_attach_controller" 00:08:39.508 } 00:08:39.508 EOF 00:08:39.508 )") 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.508 { 00:08:39.508 "params": { 00:08:39.508 "name": "Nvme$subsystem", 00:08:39.508 "trtype": "$TEST_TRANSPORT", 00:08:39.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.508 "adrfam": "ipv4", 00:08:39.508 "trsvcid": "$NVMF_PORT", 00:08:39.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.508 "hdgst": ${hdgst:-false}, 00:08:39.508 "ddgst": ${ddgst:-false} 00:08:39.508 }, 00:08:39.508 "method": "bdev_nvme_attach_controller" 00:08:39.508 } 00:08:39.508 EOF 00:08:39.508 )") 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1368095 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.508 "params": { 00:08:39.508 "name": "Nvme1", 00:08:39.508 "trtype": "tcp", 00:08:39.508 "traddr": "10.0.0.2", 00:08:39.508 "adrfam": "ipv4", 00:08:39.508 "trsvcid": "4420", 00:08:39.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.508 "hdgst": false, 00:08:39.508 "ddgst": false 00:08:39.508 }, 00:08:39.508 "method": "bdev_nvme_attach_controller" 00:08:39.508 }' 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
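The fully expanded config just printed (three identical ones follow for the read, flush, and unmap jobs) is what each bdevperf instance reads over /dev/fd/63: gen_nvmf_target_json expands a here-doc template once per subsystem into a config array, joins the fragments with IFS=',', and runs the result through jq, which doubles as a syntax check. A stripped-down sketch of that pattern, assuming a single subsystem and with defaults filled in from this run's environment (the harness's real helper in nvmf/common.sh carries more plumbing than shown here):

# illustrative reduction of gen_nvmf_target_json; not the harness's exact code
nvmf_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do          # no arguments means one controller, "1"
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,                             # "$*" joins array elements with the first IFS char
  printf '%s\n' "${config[*]}" | jq .     # jq rejects the output if the JSON is malformed
}

Called with no arguments this emits the same Nvme1-at-10.0.0.2:4420 block seen in the trace; bdevperf then uses it to attach the controller before running its one-second workload.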
00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.508 "params": { 00:08:39.508 "name": "Nvme1", 00:08:39.508 "trtype": "tcp", 00:08:39.508 "traddr": "10.0.0.2", 00:08:39.508 "adrfam": "ipv4", 00:08:39.508 "trsvcid": "4420", 00:08:39.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.508 "hdgst": false, 00:08:39.508 "ddgst": false 00:08:39.508 }, 00:08:39.508 "method": "bdev_nvme_attach_controller" 00:08:39.508 }' 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.508 "params": { 00:08:39.508 "name": "Nvme1", 00:08:39.508 "trtype": "tcp", 00:08:39.508 "traddr": "10.0.0.2", 00:08:39.508 "adrfam": "ipv4", 00:08:39.508 "trsvcid": "4420", 00:08:39.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.508 "hdgst": false, 00:08:39.508 "ddgst": false 00:08:39.508 }, 00:08:39.508 "method": "bdev_nvme_attach_controller" 00:08:39.508 }' 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:39.508 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.508 "params": { 00:08:39.508 "name": "Nvme1", 00:08:39.508 "trtype": "tcp", 00:08:39.508 "traddr": "10.0.0.2", 00:08:39.508 "adrfam": "ipv4", 00:08:39.508 "trsvcid": "4420", 00:08:39.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.508 "hdgst": false, 00:08:39.508 "ddgst": false 00:08:39.508 }, 00:08:39.508 "method": "bdev_nvme_attach_controller" 00:08:39.508 }' 00:08:39.508 [2024-12-09 05:01:53.471237] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:39.508 [2024-12-09 05:01:53.471374] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:39.508 [2024-12-09 05:01:53.474932] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:39.509 [2024-12-09 05:01:53.475047] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:39.509 [2024-12-09 05:01:53.479223] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:39.509 [2024-12-09 05:01:53.479330] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:39.509 [2024-12-09 05:01:53.481037] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:39.509 [2024-12-09 05:01:53.481144] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:39.770 [2024-12-09 05:01:53.747737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.032 [2024-12-09 05:01:53.846095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.032 [2024-12-09 05:01:53.872055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:40.032 [2024-12-09 05:01:53.940745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.032 [2024-12-09 05:01:53.963863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:40.032 [2024-12-09 05:01:53.994657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.292 [2024-12-09 05:01:54.069689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:40.292 [2024-12-09 05:01:54.115512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:40.292 Running I/O for 1 seconds... 00:08:40.553 Running I/O for 1 seconds... 00:08:40.553 Running I/O for 1 seconds... 00:08:40.553 Running I/O for 1 seconds... 00:08:41.496 9373.00 IOPS, 36.61 MiB/s 00:08:41.496 Latency(us) 00:08:41.496 [2024-12-09T04:01:55.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.496 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:41.496 Nvme1n1 : 1.01 9436.45 36.86 0.00 0.00 13509.13 3249.49 29491.20 00:08:41.496 [2024-12-09T04:01:55.493Z] =================================================================================================================== 00:08:41.496 [2024-12-09T04:01:55.493Z] Total : 9436.45 36.86 0.00 0.00 13509.13 3249.49 29491.20 00:08:41.496 9733.00 IOPS, 38.02 MiB/s 00:08:41.496 Latency(us) 00:08:41.496 [2024-12-09T04:01:55.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.496 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:41.496 Nvme1n1 : 1.01 9802.41 38.29 0.00 0.00 13014.32 4969.81 20753.07 00:08:41.496 [2024-12-09T04:01:55.493Z] =================================================================================================================== 00:08:41.496 [2024-12-09T04:01:55.493Z] Total : 9802.41 38.29 0.00 0.00 13014.32 4969.81 20753.07 00:08:41.496 10426.00 IOPS, 40.73 MiB/s 00:08:41.496 Latency(us) 00:08:41.496 [2024-12-09T04:01:55.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.496 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:41.496 Nvme1n1 : 1.01 10507.33 41.04 0.00 0.00 12140.08 4833.28 23811.41 00:08:41.496 [2024-12-09T04:01:55.493Z] =================================================================================================================== 00:08:41.496 [2024-12-09T04:01:55.493Z] Total : 10507.33 41.04 0.00 0.00 12140.08 4833.28 23811.41 00:08:41.756 164672.00 IOPS, 643.25 MiB/s 00:08:41.756 Latency(us) 00:08:41.756 [2024-12-09T04:01:55.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.756 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:41.756 Nvme1n1 : 1.00 164326.04 641.90 0.00 0.00 774.64 341.33 2075.31 00:08:41.756 [2024-12-09T04:01:55.753Z] 
=================================================================================================================== 00:08:41.756 [2024-12-09T04:01:55.753Z] Total : 164326.04 641.90 0.00 0.00 774.64 341.33 2075.31 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1368097 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1368099 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1368102 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.016 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.016 rmmod nvme_tcp 00:08:42.277 rmmod nvme_fabrics 00:08:42.277 rmmod nvme_keyring 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1367854 ']' 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1367854 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1367854 ']' 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1367854 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1367854 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1367854' 00:08:42.277 killing process with pid 1367854 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1367854 00:08:42.277 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1367854 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.218 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.148 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.148 00:08:45.148 real 0m14.690s 00:08:45.148 user 0m25.989s 00:08:45.148 sys 0m8.228s 00:08:45.148 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.148 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.148 ************************************ 00:08:45.148 END TEST nvmf_bdev_io_wait 00:08:45.148 ************************************ 00:08:45.148 05:01:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.148 05:01:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.148 05:01:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.148 05:01:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.148 ************************************ 00:08:45.148 START TEST nvmf_queue_depth 00:08:45.148 ************************************ 00:08:45.148 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.410 * Looking for test storage... 
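The preamble below repeats, verbatim, the lcov probe that opened nvmf_bdev_io_wait at the top of this section: lcov --version piped through awk '{print $NF}' yields the installed version, and the lt/cmp_versions helpers in scripts/common.sh split both version strings on '.', '-' and ':' and compare them field by field, so lt 1.15 2 succeeds and the lcov 1.x branch/function coverage flags are exported. Reconstructed from the xtrace (the decimal helper's zero fallback and the equal-versions tail are simplified assumptions):

# lt A B  - succeeds when version A sorts strictly before version B
lt() { cmp_versions "$1" '<' "$2"; }

# normalize one version field to a number (falling back to 0 is a simplification)
decimal() {
  local d=$1
  [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
  local ver1 ver1_l ver2 ver2_l
  local op=$2 v
  IFS=.-: read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}
  IFS=.-: read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}
  # walk the longer of the two versions, padding the shorter with zeros
  for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
    ver1[v]=$(decimal "${ver1[v]:-0}")
    ver2[v]=$(decimal "${ver2[v]:-0}")
    if (( ver1[v] > ver2[v] )); then [[ $op == '>' || $op == '>=' ]]; return; fi
    if (( ver1[v] < ver2[v] )); then [[ $op == '<' || $op == '<=' ]]; return; fi
  done
  [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # every field matched
}

For lcov 1.15 against 2 the loop decides on the very first field (1 < 2) and returns success, which is why the coverage-flag exports fire in both tests.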
00:08:45.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:45.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.410 --rc genhtml_branch_coverage=1 00:08:45.410 --rc genhtml_function_coverage=1 00:08:45.410 --rc genhtml_legend=1 00:08:45.410 --rc geninfo_all_blocks=1 00:08:45.410 --rc geninfo_unexecuted_blocks=1 00:08:45.410 00:08:45.410 ' 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:45.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.410 --rc genhtml_branch_coverage=1 00:08:45.410 --rc genhtml_function_coverage=1 00:08:45.410 --rc genhtml_legend=1 00:08:45.410 --rc geninfo_all_blocks=1 00:08:45.410 --rc geninfo_unexecuted_blocks=1 00:08:45.410 00:08:45.410 ' 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:45.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.410 --rc genhtml_branch_coverage=1 00:08:45.410 --rc genhtml_function_coverage=1 00:08:45.410 --rc genhtml_legend=1 00:08:45.410 --rc geninfo_all_blocks=1 00:08:45.410 --rc geninfo_unexecuted_blocks=1 00:08:45.410 00:08:45.410 ' 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.410 --rc genhtml_branch_coverage=1 00:08:45.410 --rc genhtml_function_coverage=1 00:08:45.410 --rc genhtml_legend=1 00:08:45.410 --rc geninfo_all_blocks=1 00:08:45.410 --rc geninfo_unexecuted_blocks=1 00:08:45.410 00:08:45.410 ' 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.410 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
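Worth noting in passing: the hostnqn above is minted fresh per run. nvme gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and common.sh keeps both the full NQN and the bare uuid (NVME_HOSTID) for the --hostnqn/--hostid pair it hands to nvme connect later. A plausible derivation, since the exact expansion used by common.sh is not visible in this trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:008c5ac1-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' -> the bare uuid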
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
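The "[: : integer expression expected" complaint just above is harmless but real: common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable it tests is unset in this configuration, and test(1) demands an integer on both sides of -eq. The usual hardening is to default the expansion or compare as a string; a quick illustration (flag is a stand-in name, the actual variable at line 33 is not visible in the trace):

    flag=""
    [ "$flag" -eq 1 ]      && echo hit   # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo hit   # empty defaults to 0 -> quiet
    [[ $flag == 1 ]]       && echo hit   # string comparison, also safe when empty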
MALLOC_BLOCK_SIZE=512 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.411 05:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.551 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:53.552 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:53.552 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:53.552 Found net devices under 0000:31:00.0: cvl_0_0 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:53.552 Found net devices under 0000:31:00.1: cvl_0_1 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
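NIC discovery here is a two-stage filter, both stages visible in the trace: the host PCI bus is first matched against a hard-coded vendor:device allowlist (intel=0x8086 / mellanox=0x15b3 plus the per-family IDs loaded into the e810/x722/mlx arrays), then each surviving function is mapped to its kernel netdev by globbing /sys/bus/pci/devices/$pci/net/*. Both steps can be reproduced by hand on this host:

    lspci -d 8086:159b                            # E810 (0x159b) functions -> 31:00.0 and 31:00.1
    ls /sys/bus/pci/devices/0000:31:00.0/net/     # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:31:00.1/net/     # -> cvl_0_1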
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:08:53.552 00:08:53.552 --- 10.0.0.2 ping statistics --- 00:08:53.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.552 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:08:53.552 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:08:53.552 00:08:53.552 --- 10.0.0.1 ping statistics --- 00:08:53.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.552 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1373170 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1373170 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1373170 ']' 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.553 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.553 [2024-12-09 05:02:07.038318] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
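Everything nvmf_tcp_init just did folds the two E810 ports into a self-contained point-to-point fabric on a single host: the target port moves into a private network namespace at 10.0.0.2, the initiator port stays in the root namespace at 10.0.0.1, an ACCEPT rule tagged SPDK_NVMF (so teardown can find it again) opens TCP/4420, a ping in each direction proves the link, and nvmf_tgt is then started inside the namespace via ip netns exec. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                                         # 0.674 ms -> fabric is up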
00:08:53.553 [2024-12-09 05:02:07.038456] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.553 [2024-12-09 05:02:07.204664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.553 [2024-12-09 05:02:07.327538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.553 [2024-12-09 05:02:07.327601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.553 [2024-12-09 05:02:07.327614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.553 [2024-12-09 05:02:07.327628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.553 [2024-12-09 05:02:07.327640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.553 [2024-12-09 05:02:07.329110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.813 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.813 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:53.813 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.813 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.813 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.075 [2024-12-09 05:02:07.847504] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.075 Malloc0 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.075 05:02:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.075 [2024-12-09 05:02:07.964178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1373306 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1373306 /var/tmp/bdevperf.sock 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1373306 ']' 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.075 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.075 [2024-12-09 05:02:08.060096] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
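With the listener up the target side is complete; the five rpc_cmd calls above are equivalent to driving scripts/rpc.py by hand against the default /var/tmp/spdk.sock that nvmf_tgt serves (the unix socket stays reachable from the root namespace even though the daemon runs under ip netns exec):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Net result: a 64 MiB malloc ramdisk with 512 B blocks exported as the sole namespace of cnode1, open to any host (-a), listening on 10.0.0.2:4420.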
00:08:54.075 [2024-12-09 05:02:08.060223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1373306 ] 00:08:54.336 [2024-12-09 05:02:08.217892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.602 [2024-12-09 05:02:08.344978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.863 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.863 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:54.863 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:54.863 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.863 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.123 NVMe0n1 00:08:55.123 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.123 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.123 Running I/O for 10 seconds... 00:08:57.447 8192.00 IOPS, 32.00 MiB/s [2024-12-09T04:02:12.386Z] 9218.50 IOPS, 36.01 MiB/s [2024-12-09T04:02:13.327Z] 9621.33 IOPS, 37.58 MiB/s [2024-12-09T04:02:14.267Z] 9967.50 IOPS, 38.94 MiB/s [2024-12-09T04:02:15.216Z] 10242.40 IOPS, 40.01 MiB/s [2024-12-09T04:02:16.154Z] 10505.33 IOPS, 41.04 MiB/s [2024-12-09T04:02:17.144Z] 10710.29 IOPS, 41.84 MiB/s [2024-12-09T04:02:18.523Z] 10905.75 IOPS, 42.60 MiB/s [2024-12-09T04:02:19.460Z] 11041.22 IOPS, 43.13 MiB/s [2024-12-09T04:02:19.460Z] 11184.00 IOPS, 43.69 MiB/s 00:09:05.463 Latency(us) 00:09:05.463 [2024-12-09T04:02:19.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.463 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:05.463 Verification LBA range: start 0x0 length 0x4000 00:09:05.463 NVMe0n1 : 10.04 11223.22 43.84 0.00 0.00 90910.52 7372.80 85196.80 00:09:05.463 [2024-12-09T04:02:19.460Z] =================================================================================================================== 00:09:05.463 [2024-12-09T04:02:19.460Z] Total : 11223.22 43.84 0.00 0.00 90910.52 7372.80 85196.80 00:09:05.463 { 00:09:05.463 "results": [ 00:09:05.463 { 00:09:05.463 "job": "NVMe0n1", 00:09:05.463 "core_mask": "0x1", 00:09:05.463 "workload": "verify", 00:09:05.463 "status": "finished", 00:09:05.463 "verify_range": { 00:09:05.463 "start": 0, 00:09:05.463 "length": 16384 00:09:05.463 }, 00:09:05.463 "queue_depth": 1024, 00:09:05.463 "io_size": 4096, 00:09:05.463 "runtime": 10.044172, 00:09:05.463 "iops": 11223.224771539157, 00:09:05.463 "mibps": 43.84072176382483, 00:09:05.463 "io_failed": 0, 00:09:05.463 "io_timeout": 0, 00:09:05.463 "avg_latency_us": 90910.52451635795, 00:09:05.463 "min_latency_us": 7372.8, 00:09:05.463 "max_latency_us": 85196.8 00:09:05.463 } 00:09:05.463 ], 00:09:05.463 "core_count": 1 00:09:05.463 } 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
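The measurement itself is three moving parts, all visible above: bdevperf started in -z (wait-for-RPC) mode with queue depth 1024 and 4 KiB verify I/O, a single bdev_nvme_attach_controller call over its private RPC socket, and bdevperf.py perform_tests to kick the run (paths relative to the spdk checkout):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The reported numbers are self-consistent: 11223.22 IOPS x 4096 B = 43.84 MiB/s, exactly the bandwidth shown, and by Little's law a sustained queue depth of 1024 implies an average latency of 1024 / 11223.22 = 91.2 ms, matching the reported 90.91 ms average. The climb from 8192 to ~11184 IOPS across the ten one-second samples is ramp-up while the queues fill, not a configuration change.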
target/queue_depth.sh@39 -- # killprocess 1373306 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1373306 ']' 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1373306 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1373306 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1373306' 00:09:05.463 killing process with pid 1373306 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1373306 00:09:05.463 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.463 00:09:05.463 Latency(us) 00:09:05.463 [2024-12-09T04:02:19.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.463 [2024-12-09T04:02:19.460Z] =================================================================================================================== 00:09:05.463 [2024-12-09T04:02:19.460Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.463 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1373306 00:09:05.722 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:05.722 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:05.722 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.722 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:05.722 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.722 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:05.722 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.722 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.722 rmmod nvme_tcp 00:09:05.722 rmmod nvme_fabrics 00:09:05.982 rmmod nvme_keyring 00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1373170 ']' 00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1373170 00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1373170 ']' 00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1373170 
00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.982 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1373170 00:09:05.983 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:05.983 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:05.983 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1373170' 00:09:05.983 killing process with pid 1373170 00:09:05.983 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1373170 00:09:05.983 05:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1373170 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:06.553 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.554 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.554 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.097 00:09:09.097 real 0m23.507s 00:09:09.097 user 0m26.760s 00:09:09.097 sys 0m7.350s 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.097 ************************************ 00:09:09.097 END TEST nvmf_queue_depth 00:09:09.097 ************************************ 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.097 
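Teardown mirrors setup, and the iptables step explains the odd -m comment during init: nvmfcleanup unloads the kernel initiator stack, killprocess stops both daemons, and iptr rewrites the firewall by filtering the saved ruleset on the SPDK_NVMF tag, so only the rule this job added disappears. Condensed, with the namespace removal hedged because the _remove_spdk_ns body is not expanded in this trace:

    modprobe -v -r nvme-tcp                               # drops nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$nvmfpid"                                       # killprocess 1373170
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove only the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                       # presumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

The queue-depth test closes out at 23.5 s wall time, and the harness moves straight into nvmf_target_multipath, which re-sources the same common.sh preamble below.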
************************************ 00:09:09.097 START TEST nvmf_target_multipath 00:09:09.097 ************************************ 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.097 * Looking for test storage... 00:09:09.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.097 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.098 --rc genhtml_branch_coverage=1 00:09:09.098 --rc genhtml_function_coverage=1 00:09:09.098 --rc genhtml_legend=1 00:09:09.098 --rc geninfo_all_blocks=1 00:09:09.098 --rc geninfo_unexecuted_blocks=1 00:09:09.098 00:09:09.098 ' 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.098 --rc genhtml_branch_coverage=1 00:09:09.098 --rc genhtml_function_coverage=1 00:09:09.098 --rc genhtml_legend=1 00:09:09.098 --rc geninfo_all_blocks=1 00:09:09.098 --rc geninfo_unexecuted_blocks=1 00:09:09.098 00:09:09.098 ' 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.098 --rc genhtml_branch_coverage=1 00:09:09.098 --rc genhtml_function_coverage=1 00:09:09.098 --rc genhtml_legend=1 00:09:09.098 --rc geninfo_all_blocks=1 00:09:09.098 --rc geninfo_unexecuted_blocks=1 00:09:09.098 00:09:09.098 ' 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.098 --rc genhtml_branch_coverage=1 00:09:09.098 --rc genhtml_function_coverage=1 00:09:09.098 --rc genhtml_legend=1 00:09:09.098 --rc geninfo_all_blocks=1 00:09:09.098 --rc geninfo_unexecuted_blocks=1 00:09:09.098 00:09:09.098 ' 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.098 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.099 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:17.230 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:17.230 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:17.230 Found net devices under 0000:31:00.0: cvl_0_0 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.230 05:02:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.230 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:17.231 Found net devices under 0000:31:00.1: cvl_0_1 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:09:17.231 00:09:17.231 --- 10.0.0.2 ping statistics --- 00:09:17.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.231 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:09:17.231 00:09:17.231 --- 10.0.0.1 ping statistics --- 00:09:17.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.231 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:17.231 only one NIC for nvmf test 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
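
The "[: : integer expression expected" complaint above is a genuine script bug surfaced by the trace, not a test failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', feeding an empty (unset) variable to a numeric test that test(1) cannot parse as an integer. Because the broken test is only used as a condition it simply evaluates false and the run continues, but a defensive sketch of the intended pattern would default the value first (the flag name below is hypothetical; the trace does not show which variable line 33 reads):

    # Default the flag to 0 so the numeric test always sees an integer;
    # SPDK_TEST_FLAG is a placeholder for whatever common.sh line 33 tests.
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        echo "optional feature enabled"
    fi

The same message reappears further down when zcopy.sh sources the same common.sh.
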
00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.231 rmmod nvme_tcp 00:09:17.231 rmmod nvme_fabrics 00:09:17.231 rmmod nvme_keyring 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.231 05:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.145 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.146 00:09:19.146 real 0m10.032s 00:09:19.146 user 0m2.127s 00:09:19.146 sys 0m5.839s 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:19.146 ************************************ 00:09:19.146 END TEST nvmf_target_multipath 00:09:19.146 ************************************ 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.146 ************************************ 00:09:19.146 START TEST nvmf_zcopy 00:09:19.146 ************************************ 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:19.146 * Looking for test storage... 
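
The START TEST / END TEST banners and the real/user/sys triplet above come from the harness's run_test wrapper, which times each test script and propagates its exit status; nvmf_zcopy below runs under the same wrapper. A minimal sketch of the shape visible in the log (the real helper in autotest_common.sh adds xtrace management and other bookkeeping):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # emits the real/user/sys lines seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

Here it was invoked as run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp, so the entire zcopy script is timed as one unit.
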
00:09:19.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:19.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.146 --rc genhtml_branch_coverage=1 00:09:19.146 --rc genhtml_function_coverage=1 00:09:19.146 --rc genhtml_legend=1 00:09:19.146 --rc geninfo_all_blocks=1 00:09:19.146 --rc geninfo_unexecuted_blocks=1 00:09:19.146 00:09:19.146 ' 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:19.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.146 --rc genhtml_branch_coverage=1 00:09:19.146 --rc genhtml_function_coverage=1 00:09:19.146 --rc genhtml_legend=1 00:09:19.146 --rc geninfo_all_blocks=1 00:09:19.146 --rc geninfo_unexecuted_blocks=1 00:09:19.146 00:09:19.146 ' 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:19.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.146 --rc genhtml_branch_coverage=1 00:09:19.146 --rc genhtml_function_coverage=1 00:09:19.146 --rc genhtml_legend=1 00:09:19.146 --rc geninfo_all_blocks=1 00:09:19.146 --rc geninfo_unexecuted_blocks=1 00:09:19.146 00:09:19.146 ' 00:09:19.146 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:19.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.147 --rc genhtml_branch_coverage=1 00:09:19.147 --rc genhtml_function_coverage=1 00:09:19.147 --rc genhtml_legend=1 00:09:19.147 --rc geninfo_all_blocks=1 00:09:19.147 --rc geninfo_unexecuted_blocks=1 00:09:19.147 00:09:19.147 ' 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.147 05:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.147 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.148 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:27.286 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:27.286 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:27.286 Found net devices under 0000:31:00.0: cvl_0_0 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.286 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:27.287 Found net devices under 0000:31:00.1: cvl_0_1 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:09:27.287 00:09:27.287 --- 10.0.0.2 ping statistics --- 00:09:27.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.287 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
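
This is the second time in the excerpt that nvmf_tcp_init builds the standard phy-test topology: one E810 port (cvl_0_1, 10.0.0.1/24) stays in the root namespace as the initiator, its sibling (cvl_0_0, 10.0.0.2/24) moves into the cvl_0_0_ns_spdk namespace as the target, and connectivity is proven in both directions before any NVMe/TCP traffic. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # ipts tags each rule with an SPDK_NVMF comment so that iptr can later
    # sweep them all via: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
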
00:09:27.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:09:27.287 00:09:27.287 --- 10.0.0.1 ping statistics --- 00:09:27.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.287 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1384287 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1384287 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1384287 ']' 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.287 05:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.287 [2024-12-09 05:02:40.682290] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
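
nvmfappstart then launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), records nvmfpid=1384287, and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A rough sketch of that wait loop, assuming rpc.py-based polling (the real helper's retry budget and checks differ in detail):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
            # rpc_get_methods only succeeds once the RPC server is accepting
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                      # timed out
    }

The "Total cores available: 1" and "Reactor started on core 1" notices that follow are the target's own startup banner, consistent with the -m 0x2 core mask.
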
00:09:27.287 [2024-12-09 05:02:40.682415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.287 [2024-12-09 05:02:40.848373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.287 [2024-12-09 05:02:40.971845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.287 [2024-12-09 05:02:40.971918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.287 [2024-12-09 05:02:40.971932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.287 [2024-12-09 05:02:40.971945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.287 [2024-12-09 05:02:40.971959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.287 [2024-12-09 05:02:40.973484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.548 [2024-12-09 05:02:41.524347] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.548 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.549 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.809 [2024-12-09 05:02:41.548450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.809 malloc0 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:27.809 { 00:09:27.809 "params": { 00:09:27.809 "name": "Nvme$subsystem", 00:09:27.809 "trtype": "$TEST_TRANSPORT", 00:09:27.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.809 "adrfam": "ipv4", 00:09:27.809 "trsvcid": "$NVMF_PORT", 00:09:27.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.809 "hdgst": ${hdgst:-false}, 00:09:27.809 "ddgst": ${ddgst:-false} 00:09:27.809 }, 00:09:27.809 "method": "bdev_nvme_attach_controller" 00:09:27.809 } 00:09:27.809 EOF 00:09:27.809 )") 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
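
With the target listening, zcopy.sh configures it through rpc_cmd, which resolves to scripts/rpc.py here (rpc.py talks to /var/tmp/spdk.sock by default). The trace above amounts to this sequence, all values verbatim from the log:

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zero-copy enabled
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB RAM-backed bdev, 4096-byte blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

After this, malloc0 is visible as namespace 1 of cnode1 at 10.0.0.2:4420, which is exactly the path the bdevperf JSON below attaches to.
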
00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:27.809 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:27.809 "params": { 00:09:27.809 "name": "Nvme1", 00:09:27.809 "trtype": "tcp", 00:09:27.809 "traddr": "10.0.0.2", 00:09:27.809 "adrfam": "ipv4", 00:09:27.809 "trsvcid": "4420", 00:09:27.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.809 "hdgst": false, 00:09:27.809 "ddgst": false 00:09:27.809 }, 00:09:27.809 "method": "bdev_nvme_attach_controller" 00:09:27.809 }' 00:09:27.809 [2024-12-09 05:02:41.708958] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:27.809 [2024-12-09 05:02:41.709087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384566 ] 00:09:28.070 [2024-12-09 05:02:41.864730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.070 [2024-12-09 05:02:41.990108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.641 Running I/O for 10 seconds... 00:09:30.524 6453.00 IOPS, 50.41 MiB/s [2024-12-09T04:02:45.905Z] 7618.00 IOPS, 59.52 MiB/s [2024-12-09T04:02:46.846Z] 8015.00 IOPS, 62.62 MiB/s [2024-12-09T04:02:47.788Z] 8223.50 IOPS, 64.25 MiB/s [2024-12-09T04:02:48.727Z] 8345.40 IOPS, 65.20 MiB/s [2024-12-09T04:02:49.666Z] 8428.33 IOPS, 65.85 MiB/s [2024-12-09T04:02:50.606Z] 8477.29 IOPS, 66.23 MiB/s [2024-12-09T04:02:51.545Z] 8512.62 IOPS, 66.50 MiB/s [2024-12-09T04:02:52.933Z] 8546.22 IOPS, 66.77 MiB/s [2024-12-09T04:02:52.933Z] 8570.60 IOPS, 66.96 MiB/s 00:09:38.936 Latency(us) 00:09:38.936 [2024-12-09T04:02:52.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.936 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:38.936 Verification LBA range: start 0x0 length 0x1000 00:09:38.936 Nvme1n1 : 10.01 8574.40 66.99 0.00 0.00 14878.93 856.75 31457.28 00:09:38.936 [2024-12-09T04:02:52.934Z] =================================================================================================================== 00:09:38.937 [2024-12-09T04:02:52.934Z] Total : 8574.40 66.99 0.00 0.00 14878.93 856.75 31457.28 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1386657 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.197 { 00:09:39.197 "params": { 00:09:39.197 "name": 
"Nvme$subsystem", 00:09:39.197 "trtype": "$TEST_TRANSPORT", 00:09:39.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.197 "adrfam": "ipv4", 00:09:39.197 "trsvcid": "$NVMF_PORT", 00:09:39.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.197 "hdgst": ${hdgst:-false}, 00:09:39.197 "ddgst": ${ddgst:-false} 00:09:39.197 }, 00:09:39.197 "method": "bdev_nvme_attach_controller" 00:09:39.197 } 00:09:39.197 EOF 00:09:39.197 )") 00:09:39.197 [2024-12-09 05:02:52.964865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.197 [2024-12-09 05:02:52.964905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:39.197 05:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.197 "params": { 00:09:39.197 "name": "Nvme1", 00:09:39.197 "trtype": "tcp", 00:09:39.197 "traddr": "10.0.0.2", 00:09:39.197 "adrfam": "ipv4", 00:09:39.197 "trsvcid": "4420", 00:09:39.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.197 "hdgst": false, 00:09:39.197 "ddgst": false 00:09:39.197 }, 00:09:39.197 "method": "bdev_nvme_attach_controller" 00:09:39.197 }' 00:09:39.197 [2024-12-09 05:02:52.976853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.197 [2024-12-09 05:02:52.976877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.197 [2024-12-09 05:02:52.988859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.197 [2024-12-09 05:02:52.988877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.197 [2024-12-09 05:02:53.000889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.197 [2024-12-09 05:02:53.000907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.197 [2024-12-09 05:02:53.012916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.197 [2024-12-09 05:02:53.012933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.197 [2024-12-09 05:02:53.024934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.197 [2024-12-09 05:02:53.024952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.197 [2024-12-09 05:02:53.036982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.197 [2024-12-09 05:02:53.036999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.197 [2024-12-09 05:02:53.044761] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:39.197 [2024-12-09 05:02:53.044761] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:09:39.197 [2024-12-09 05:02:53.044896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386657 ]
00:09:39.197 [2024-12-09 05:02:53.049003 .. 05:02:53.169335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (identical record pair repeated 11x; intermediate timestamps elided)
00:09:39.197 [2024-12-09 05:02:53.178638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.197 [2024-12-09 05:02:53.181354 .. 05:02:53.253542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated 7x; intermediate timestamps elided)
00:09:39.459 [2024-12-09 05:02:53.254145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:39.459 [2024-12-09 05:02:53.265574 .. 05:02:53.566768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated ~26x; intermediate timestamps elided)
00:09:39.720 Running I/O for 5 seconds...
00:09:39.720 [2024-12-09 05:02:53.583163 .. 05:02:53.746286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated ~13x; intermediate timestamps elided, run continues below)
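The interleaving here shows what this phase of the test is exercising: while the backgrounded bdevperf drives I/O against the target, the driver script keeps re-issuing nvmf_subsystem_add_ns for NSID 1, and the target correctly rejects every attempt because malloc0 already occupies that namespace, producing the two-record error pair over and over. A hypothetical loop that would generate exactly this pattern (the rpc.py path and the kill -0 liveness test are assumptions; the NQN, bdev name, NSID, and perfpid come from the log above):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location
  NQN=nqn.2016-06.io.spdk:cnode1

  # Each iteration triggers the pair of records seen above:
  #   subsystem.c: "Requested NSID 1 already in use"
  #   nvmf_rpc.c:  "Unable to add namespace"
  while kill -0 "$perfpid" 2>/dev/null; do
      "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
  done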
00:09:39.981 [2024-12-09 05:02:53.757090 .. 05:02:54.561292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (identical record pair repeated ~60x; intermediate timestamps elided)
00:09:40.763 17169.00 IOPS, 134.13 MiB/s [2024-12-09T04:02:54.760Z]
00:09:40.763 [2024-12-09 05:02:54.575176 .. 05:02:54.724194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated ~12x; intermediate timestamps elided, run continues below)
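The per-second samples bracketing these records are internally consistent with the run's 8 KiB I/O size (-o 8192): 17169 I/Os of 8192 bytes each move 140,648,448 bytes, i.e. 134.13 MiB, matching the reported throughput, and the earlier 10-second verify run checks out the same way (8574.40 x 8192 / 2^20 = 66.99 MiB/s). A one-liner to verify the arithmetic:

  awk 'BEGIN { printf "%.2f MiB/s\n", 17169 * 8192 / 1048576 }'   # prints 134.13 MiB/s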
00:09:40.763 [2024-12-09 05:02:54.739983 .. 05:02:55.570337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (identical record pair repeated ~62x; intermediate timestamps elided)
00:09:41.816 17268.00 IOPS, 134.91 MiB/s [2024-12-09T04:02:55.813Z]
00:09:41.816 [2024-12-09 05:02:55.583827 .. 05:02:56.498162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (identical record pair repeated ~68x; intermediate timestamps elided)
00:09:42.601 [2024-12-09 05:02:56.511673] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.601 [2024-12-09 05:02:56.511693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.601 [2024-12-09 05:02:56.525455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.601 [2024-12-09 05:02:56.525475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.601 [2024-12-09 05:02:56.539369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.601 [2024-12-09 05:02:56.539389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.601 [2024-12-09 05:02:56.553255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.601 [2024-12-09 05:02:56.553274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.601 [2024-12-09 05:02:56.566899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.601 [2024-12-09 05:02:56.566920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.601 17291.00 IOPS, 135.09 MiB/s [2024-12-09T04:02:56.598Z] [2024-12-09 05:02:56.580760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.601 [2024-12-09 05:02:56.580780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.601 [2024-12-09 05:02:56.595286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.601 [2024-12-09 05:02:56.595305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.607042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.607063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.621193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.621214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.634903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.634923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.648289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.648309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.661900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.661920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.675460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.675481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.689022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.689045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.702556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:42.862 [2024-12-09 05:02:56.702577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.716303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.716323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.730251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.730270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.742340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.742360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.756111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.756131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.769648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.769668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.783416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.783436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.796867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.796887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.810685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.810705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.824212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.824232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.837732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.837753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.862 [2024-12-09 05:02:56.851893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.862 [2024-12-09 05:02:56.851912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.867268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.867288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.881210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.881230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.895024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.895044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.908866] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.908885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.922778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.922799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.936863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.936884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.947976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.948000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.962271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.962291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.976301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.976321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:56.989531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:56.989551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:57.003303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:57.003323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:57.016964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:57.016984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:57.030763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:57.030783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:57.044495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:57.044516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:57.058043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:57.058063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:57.072023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:57.072043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:57.083219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:57.083239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:57.097224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:57.097244] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.124 [2024-12-09 05:02:57.111107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.124 [2024-12-09 05:02:57.111127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.385 [2024-12-09 05:02:57.123220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.385 [2024-12-09 05:02:57.123248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.385 [2024-12-09 05:02:57.136966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.385 [2024-12-09 05:02:57.136985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.385 [2024-12-09 05:02:57.150487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.385 [2024-12-09 05:02:57.150507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.385 [2024-12-09 05:02:57.164241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.385 [2024-12-09 05:02:57.164261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.177973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.177993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.191506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.191526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.205342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.205366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.219339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.219358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.233365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.233384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.244596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.244615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.259095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.259114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.272708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.272728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.286671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.286690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.300233] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.300252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.313749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.313768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.327296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.327315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.340842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.340861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.354523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.354542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.386 [2024-12-09 05:02:57.367971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.386 [2024-12-09 05:02:57.367990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.645 [2024-12-09 05:02:57.381966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.381985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.395698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.395717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.409260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.409279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.422855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.422874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.436703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.436723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.450113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.450133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.463942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.463967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.477141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.477161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.491336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.491356] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.504847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.504867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.519001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.519020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.531421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.531440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.545612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.545631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.557131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.557150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.570813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.570838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 17312.50 IOPS, 135.25 MiB/s [2024-12-09T04:02:57.643Z] [2024-12-09 05:02:57.584508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.584527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.598200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.598219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.611978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.611997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.625747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.625767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-12-09 05:02:57.639274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-12-09 05:02:57.639293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.653241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.653260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.666765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.666784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.680795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.680814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 
05:02:57.694530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.694550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.707970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.707989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.722017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.722038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.732826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.732845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.747280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.747300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.760865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.760884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.774804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.774828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.788743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.788762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.800525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.800544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.814346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.814366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.828150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.828170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.839605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.839625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.853983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.854002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.867691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.867710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.881793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.881812] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.906 [2024-12-09 05:02:57.893648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.906 [2024-12-09 05:02:57.893667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-09 05:02:57.907213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:57.907233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:57.921330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:57.921350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:57.934747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:57.934767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:57.948260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:57.948281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:57.962208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:57.962228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:57.975706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:57.975726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:57.989483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:57.989509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.003146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.003166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.016914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.016933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.030540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.030560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.044637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.044656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.055950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.055969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.069811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.069837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.083371] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.083390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.097565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.097584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.109087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.109106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.123256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.123276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.137549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.137568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-12-09 05:02:58.152906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-12-09 05:02:58.152926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.166975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.166995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.181115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.181133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.196485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.196504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.210384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.210404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.224041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.224061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.237902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.237922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.249075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.249094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.263658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.263678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.277796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.277823] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.288699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.288719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.302580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.302600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.315958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.315979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.329945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.329965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.343562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.343582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.357235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.357255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.371275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.371295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.383234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.383254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.397349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.397370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.428 [2024-12-09 05:02:58.411316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.428 [2024-12-09 05:02:58.411336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.689 [2024-12-09 05:02:58.425232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.689 [2024-12-09 05:02:58.425253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.689 [2024-12-09 05:02:58.438858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.689 [2024-12-09 05:02:58.438878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.689 [2024-12-09 05:02:58.452427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.689 [2024-12-09 05:02:58.452447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.689 [2024-12-09 05:02:58.465513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.689 [2024-12-09 05:02:58.465534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.689 [2024-12-09 05:02:58.479564] 
00:09:44.689 17307.60 IOPS, 135.22 MiB/s [2024-12-09T04:02:58.686Z]
00:09:44.689
00:09:44.689 Latency(us)
00:09:44.689 [2024-12-09T04:02:58.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:44.689 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:44.689 Nvme1n1 : 5.01 17310.47 135.24 0.00 0.00 7387.39 3522.56 18131.63
00:09:44.689 [2024-12-09T04:02:58.686Z] ===================================================================================================================
00:09:44.689 [2024-12-09T04:02:58.686Z] Total : 17310.47 135.24 0.00 0.00 7387.39 3522.56 18131.63
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" pairs resume and repeat from 05:02:58.596 through 05:02:59.029 while the RPC loop drains; duplicates omitted ...]
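The flood of paired errors above is the expected outcome of this zcopy phase: NSID 1 stays attached to nqn.2016-06.io.spdk:cnode1 while the test keeps asking to add it again, and the target refuses each attempt. A minimal sketch of that refusal, assuming a running SPDK target with scripts/rpc.py reachable and an existing bdev named malloc0 (the subsystem NQN matches the log; the bdev name is illustrative):

    # First attach succeeds: NSID 1 is now bound to the subsystem.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Every repeat request for the same NSID is rejected by the target with
    # "Requested NSID 1 already in use", which is exactly what the log shows.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
        || echo "expected failure: NSID 1 already attached"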
00:09:45.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1386657) - No such process
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1386657
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:45.210 delay0
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.210 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:45.469 [2024-12-09 05:02:59.238256] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:52.053 Initializing NVMe Controllers
00:09:52.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:52.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:52.053 Initialization complete. Launching workers.
00:09:52.053 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 123
00:09:52.053 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 403, failed to submit 40
00:09:52.053 success 213, unsuccessful 190, failed 0
00:09:52.053 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:52.053 rmmod nvme_tcp
00:09:52.053 rmmod nvme_fabrics
00:09:52.053 rmmod nvme_keyring
00:09:52.053 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1384287 ']'
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1384287
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1384287 ']'
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1384287
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1384287
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1384287'
killing process with pid 1384287
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1384287
05:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1384287
00:09:52.313 05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.313 05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.314 05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.314 05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.314 05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.314 05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.314 05:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.866 00:09:54.866 real 0m35.579s 00:09:54.866 user 0m48.696s 00:09:54.866 sys 0m10.301s 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.866 ************************************ 00:09:54.866 END TEST nvmf_zcopy 00:09:54.866 ************************************ 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.866 ************************************ 00:09:54.866 START TEST nvmf_nmic 00:09:54.866 ************************************ 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:54.866 * Looking for test storage... 
00:09:54.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.866 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.867 --rc genhtml_branch_coverage=1 00:09:54.867 --rc genhtml_function_coverage=1 00:09:54.867 --rc genhtml_legend=1 00:09:54.867 --rc geninfo_all_blocks=1 00:09:54.867 --rc geninfo_unexecuted_blocks=1 00:09:54.867 00:09:54.867 ' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.867 --rc genhtml_branch_coverage=1 00:09:54.867 --rc genhtml_function_coverage=1 00:09:54.867 --rc genhtml_legend=1 00:09:54.867 --rc geninfo_all_blocks=1 00:09:54.867 --rc geninfo_unexecuted_blocks=1 00:09:54.867 00:09:54.867 ' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.867 --rc genhtml_branch_coverage=1 00:09:54.867 --rc genhtml_function_coverage=1 00:09:54.867 --rc genhtml_legend=1 00:09:54.867 --rc geninfo_all_blocks=1 00:09:54.867 --rc geninfo_unexecuted_blocks=1 00:09:54.867 00:09:54.867 ' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.867 --rc genhtml_branch_coverage=1 00:09:54.867 --rc genhtml_function_coverage=1 00:09:54.867 --rc genhtml_legend=1 00:09:54.867 --rc geninfo_all_blocks=1 00:09:54.867 --rc geninfo_unexecuted_blocks=1 00:09:54.867 00:09:54.867 ' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:54.867 
05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.867 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:03.005 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:03.005 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.005 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.006 05:03:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:03.006 Found net devices under 0000:31:00.0: cvl_0_0 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:03.006 Found net devices under 0000:31:00.1: cvl_0_1 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.006 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:10:03.006 00:10:03.006 --- 10.0.0.2 ping statistics --- 00:10:03.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.006 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:10:03.006 00:10:03.006 --- 10.0.0.1 ping statistics --- 00:10:03.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.006 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1394194 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1394194 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1394194 ']' 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.006 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.006 [2024-12-09 05:03:16.378774] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:03.006 [2024-12-09 05:03:16.378921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.006 [2024-12-09 05:03:16.545314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.006 [2024-12-09 05:03:16.674472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.006 [2024-12-09 05:03:16.674543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.006 [2024-12-09 05:03:16.674556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.006 [2024-12-09 05:03:16.674570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.006 [2024-12-09 05:03:16.674580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.006 [2024-12-09 05:03:16.677511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.006 [2024-12-09 05:03:16.677648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.006 [2024-12-09 05:03:16.677751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.006 [2024-12-09 05:03:16.677777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.266 [2024-12-09 05:03:17.217544] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.266 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 Malloc0 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 [2024-12-09 05:03:17.338090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:03.526 test case1: single bdev can't be used in multiple subsystems 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 [2024-12-09 05:03:17.373775] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:03.526 [2024-12-09 05:03:17.373834] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:03.526 [2024-12-09 05:03:17.373854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.526 request: 00:10:03.526 { 00:10:03.526 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:03.526 "namespace": { 00:10:03.526 "bdev_name": "Malloc0", 00:10:03.526 "no_auto_visible": false, 
00:10:03.526 "hide_metadata": false 00:10:03.526 }, 00:10:03.526 "method": "nvmf_subsystem_add_ns", 00:10:03.526 "req_id": 1 00:10:03.526 } 00:10:03.526 Got JSON-RPC error response 00:10:03.526 response: 00:10:03.526 { 00:10:03.526 "code": -32602, 00:10:03.526 "message": "Invalid parameters" 00:10:03.526 } 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:03.526 Adding namespace failed - expected result. 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:03.526 test case2: host connect to nvmf target in multiple paths 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.526 [2024-12-09 05:03:17.386030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:03.526 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.527 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.435 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:06.819 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.819 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:06.819 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.819 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:06.819 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:08.768 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:08.768 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:08.768 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.768 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:08.768 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.768 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:08.768 05:03:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:08.768 [global] 00:10:08.768 thread=1 00:10:08.768 invalidate=1 00:10:08.768 rw=write 00:10:08.768 time_based=1 00:10:08.768 runtime=1 00:10:08.768 ioengine=libaio 00:10:08.768 direct=1 00:10:08.768 bs=4096 00:10:08.768 iodepth=1 00:10:08.768 norandommap=0 00:10:08.768 numjobs=1 00:10:08.768 00:10:08.768 verify_dump=1 00:10:08.768 verify_backlog=512 00:10:08.768 verify_state_save=0 00:10:08.768 do_verify=1 00:10:08.768 verify=crc32c-intel 00:10:08.768 [job0] 00:10:08.768 filename=/dev/nvme0n1 00:10:08.768 Could not set queue depth (nvme0n1) 00:10:09.033 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.033 fio-3.35 00:10:09.033 Starting 1 thread 00:10:10.414 00:10:10.414 job0: (groupid=0, jobs=1): err= 0: pid=1395597: Mon Dec 9 05:03:24 2024 00:10:10.414 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:10.414 slat (nsec): min=8670, max=58629, avg=27000.29, stdev=4677.77 00:10:10.414 clat (usec): min=750, max=1207, avg=1003.10, stdev=79.43 00:10:10.414 lat (usec): min=759, max=1234, avg=1030.10, stdev=80.32 00:10:10.414 clat percentiles (usec): 00:10:10.414 | 1.00th=[ 791], 5.00th=[ 865], 10.00th=[ 889], 20.00th=[ 947], 00:10:10.414 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1004], 60.00th=[ 1029], 00:10:10.414 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1123], 00:10:10.414 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:10.414 | 99.99th=[ 1205] 00:10:10.414 write: IOPS=697, BW=2789KiB/s (2856kB/s)(2792KiB/1001msec); 0 zone resets 00:10:10.414 slat (usec): min=9, max=27777, avg=70.70, stdev=1050.26 00:10:10.414 clat (usec): min=314, max=896, avg=591.32, stdev=108.66 00:10:10.414 lat (usec): min=323, max=28467, avg=662.02, stdev=1060.05 00:10:10.414 clat percentiles (usec): 00:10:10.415 | 1.00th=[ 343], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 494], 00:10:10.415 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 611], 00:10:10.415 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 766], 00:10:10.415 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 898], 99.95th=[ 898], 00:10:10.415 | 99.99th=[ 898] 00:10:10.415 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.415 lat (usec) : 500=12.48%, 750=40.83%, 1000=23.72% 00:10:10.415 lat (msec) : 2=22.98% 00:10:10.415 cpu : usr=2.50%, sys=4.60%, ctx=1215, majf=0, minf=1 00:10:10.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.415 issued rwts: total=512,698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.415 00:10:10.415 Run status group 0 (all jobs): 00:10:10.415 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:10.415 WRITE: bw=2789KiB/s (2856kB/s), 2789KiB/s-2789KiB/s (2856kB/s-2856kB/s), io=2792KiB (2859kB), run=1001-1001msec 00:10:10.415 00:10:10.415 Disk stats (read/write): 00:10:10.415 nvme0n1: ios=537/529, merge=0/0, ticks=1465/241, in_queue=1706, util=98.80% 00:10:10.415 
05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:10.415 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.415 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:10.415 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:10.415 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.674 rmmod nvme_tcp 00:10:10.674 rmmod nvme_fabrics 00:10:10.674 rmmod nvme_keyring 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1394194 ']' 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1394194 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1394194 ']' 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1394194 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1394194 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1394194' 00:10:10.674 killing process with pid 1394194 00:10:10.674 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1394194 00:10:10.674 
05:03:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1394194 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.611 05:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.523 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.523 00:10:13.523 real 0m18.933s 00:10:13.523 user 0m47.359s 00:10:13.523 sys 0m6.952s 00:10:13.523 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.523 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.523 ************************************ 00:10:13.523 END TEST nvmf_nmic 00:10:13.523 ************************************ 00:10:13.523 05:03:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:13.523 05:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.523 05:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.523 05:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.523 ************************************ 00:10:13.523 START TEST nvmf_fio_target 00:10:13.523 ************************************ 00:10:13.523 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:13.785 * Looking for test storage... 
00:10:13.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.785 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:13.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.786 --rc genhtml_branch_coverage=1 00:10:13.786 --rc genhtml_function_coverage=1 00:10:13.786 --rc genhtml_legend=1 00:10:13.786 --rc geninfo_all_blocks=1 00:10:13.786 --rc geninfo_unexecuted_blocks=1 00:10:13.786 00:10:13.786 ' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:13.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.786 --rc genhtml_branch_coverage=1 00:10:13.786 --rc genhtml_function_coverage=1 00:10:13.786 --rc genhtml_legend=1 00:10:13.786 --rc geninfo_all_blocks=1 00:10:13.786 --rc geninfo_unexecuted_blocks=1 00:10:13.786 00:10:13.786 ' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:13.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.786 --rc genhtml_branch_coverage=1 00:10:13.786 --rc genhtml_function_coverage=1 00:10:13.786 --rc genhtml_legend=1 00:10:13.786 --rc geninfo_all_blocks=1 00:10:13.786 --rc geninfo_unexecuted_blocks=1 00:10:13.786 00:10:13.786 ' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:13.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.786 --rc genhtml_branch_coverage=1 00:10:13.786 --rc genhtml_function_coverage=1 00:10:13.786 --rc genhtml_legend=1 00:10:13.786 --rc geninfo_all_blocks=1 00:10:13.786 --rc geninfo_unexecuted_blocks=1 00:10:13.786 00:10:13.786 ' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.786 05:03:27 
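Note on the "[: : integer expression expected" line above: it is nvmf/common.sh line 33 feeding an unset value to a numeric test ('[' '' -eq 1 ']'). The test merely evaluates false and the script continues, so this is noise rather than a failure. A minimal sketch of the failure mode and a guarded form (the variable name here is a hypothetical stand-in, not the actual common.sh code):

    # An empty string in a numeric test reproduces the exact message:
    unset SPDK_SOME_FLAG                       # hypothetical stand-in flag
    [ "$SPDK_SOME_FLAG" -eq 1 ] && echo on     # -> [: : integer expression expected
    # Defaulting the value keeps the test quiet and simply false:
    [ "${SPDK_SOME_FLAG:-0}" -eq 1 ] && echo on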
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.786 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.927 05:03:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.927 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:21.928 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:21.928 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.928 05:03:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:21.928 Found net devices under 0000:31:00.0: cvl_0_0 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:21.928 Found net devices under 0000:31:00.1: cvl_0_1 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.928 05:03:34 
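Note: gather_supported_nvmf_pci_devs above matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, driver ice) and resolved their net devices (cvl_0_0, cvl_0_1) through sysfs. A rough sketch of that lookup using the IDs printed in this log; the real helper caches the PCI bus and recognizes many more device IDs:

    # Map E810 PCI functions to their kernel net devices via sysfs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")    # e.g. 0x8086 (Intel)
        device=$(cat "$pci/device")    # e.g. 0x159b (E810)
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            for net in "$pci"/net/*; do
                echo "Found ${pci##*/} ($vendor - $device) -> ${net##*/}"
            done
        fi
    done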
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.928 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:10:21.928 00:10:21.928 --- 10.0.0.2 ping statistics --- 00:10:21.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.928 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:21.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:10:21.928 00:10:21.928 --- 10.0.0.1 ping statistics --- 00:10:21.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.928 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1400330 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1400330 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1400330 ']' 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.928 05:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.928 [2024-12-09 05:03:35.319658] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
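Note: the nvmf_tcp_init trace above builds the physical-NIC test topology. One E810 port (cvl_0_0) is moved into namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the NVMe/TCP port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the commands in this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator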
00:10:21.928 [2024-12-09 05:03:35.319794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.928 [2024-12-09 05:03:35.483904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.928 [2024-12-09 05:03:35.615686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.929 [2024-12-09 05:03:35.615756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.929 [2024-12-09 05:03:35.615769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.929 [2024-12-09 05:03:35.615781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.929 [2024-12-09 05:03:35.615791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.929 [2024-12-09 05:03:35.618869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.929 [2024-12-09 05:03:35.618980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.929 [2024-12-09 05:03:35.619248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.929 [2024-12-09 05:03:35.619263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.188 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.188 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:22.188 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:22.188 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.188 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.188 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.188 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.447 [2024-12-09 05:03:36.316297] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.447 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.707 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:22.707 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.966 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:22.966 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.226 05:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:23.226 05:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.487 05:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:23.487 05:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:23.748 05:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.009 05:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:24.009 05:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.269 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:24.269 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.529 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:24.529 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:24.529 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:24.807 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:24.807 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.067 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:25.067 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.067 05:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.329 [2024-12-09 05:03:39.122551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.329 05:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:25.589 05:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:25.589 05:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:27.500 05:03:41 
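Note: by this point fio.sh has provisioned the target over JSON-RPC and connected the initiator: a TCP transport, seven 64 MiB malloc bdevs with 512 B blocks, a RAID-0 over Malloc2/Malloc3 plus a concat over Malloc4-6, and subsystem cnode1 with four namespaces listening on 10.0.0.2:4420. The same sequence, condensed (rpc.py stands for the full scripts/rpc.py path used above; the script issues the seven identical malloc creates individually). The four namespaces are why waitforserial below polls lsblk until it counts 4 devices with serial SPDKISFASTANDAWESOME:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 0 1 2 3 4 5 6; do
        rpc.py bdev_malloc_create 64 512                  # -> Malloc0..Malloc6
    done
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  # surfaces /dev/nvme0n1..n4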
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:27.500 05:03:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:27.500 05:03:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:27.500 05:03:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:27.500 05:03:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:27.500 05:03:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:29.412 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:29.412 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:29.412 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:29.412 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:29.412 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:29.412 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:29.412 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:29.412 [global] 00:10:29.412 thread=1 00:10:29.412 invalidate=1 00:10:29.412 rw=write 00:10:29.412 time_based=1 00:10:29.412 runtime=1 00:10:29.412 ioengine=libaio 00:10:29.412 direct=1 00:10:29.412 bs=4096 00:10:29.412 iodepth=1 00:10:29.412 norandommap=0 00:10:29.412 numjobs=1 00:10:29.412 00:10:29.412 verify_dump=1 00:10:29.412 verify_backlog=512 00:10:29.412 verify_state_save=0 00:10:29.412 do_verify=1 00:10:29.412 verify=crc32c-intel 00:10:29.412 [job0] 00:10:29.412 filename=/dev/nvme0n1 00:10:29.412 [job1] 00:10:29.412 filename=/dev/nvme0n2 00:10:29.412 [job2] 00:10:29.412 filename=/dev/nvme0n3 00:10:29.412 [job3] 00:10:29.412 filename=/dev/nvme0n4 00:10:29.412 Could not set queue depth (nvme0n1) 00:10:29.412 Could not set queue depth (nvme0n2) 00:10:29.412 Could not set queue depth (nvme0n3) 00:10:29.412 Could not set queue depth (nvme0n4) 00:10:29.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.672 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.672 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.672 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.672 fio-3.35 00:10:29.672 Starting 4 threads 00:10:31.082 00:10:31.082 job0: (groupid=0, jobs=1): err= 0: pid=1402136: Mon Dec 9 05:03:44 2024 00:10:31.082 read: IOPS=17, BW=70.9KiB/s (72.6kB/s)(72.0KiB/1015msec) 00:10:31.082 slat (nsec): min=15153, max=26818, avg=25297.17, stdev=3665.87 00:10:31.082 clat (usec): min=836, max=42871, avg=39716.23, stdev=9714.43 00:10:31.082 lat (usec): min=851, max=42897, avg=39741.53, stdev=9716.87 00:10:31.082 clat percentiles (usec): 00:10:31.082 | 1.00th=[ 840], 5.00th=[ 840], 10.00th=[41157], 
20.00th=[41681], 00:10:31.082 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:31.082 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:10:31.082 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:31.082 | 99.99th=[42730] 00:10:31.082 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:31.082 slat (usec): min=6, max=4664, avg=29.11, stdev=206.28 00:10:31.082 clat (usec): min=119, max=1054, avg=545.31, stdev=156.37 00:10:31.082 lat (usec): min=126, max=5476, avg=574.42, stdev=269.25 00:10:31.082 clat percentiles (usec): 00:10:31.082 | 1.00th=[ 143], 5.00th=[ 289], 10.00th=[ 343], 20.00th=[ 420], 00:10:31.082 | 30.00th=[ 465], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 586], 00:10:31.082 | 70.00th=[ 627], 80.00th=[ 676], 90.00th=[ 750], 95.00th=[ 807], 00:10:31.082 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 1057], 99.95th=[ 1057], 00:10:31.082 | 99.99th=[ 1057] 00:10:31.082 bw ( KiB/s): min= 4096, max= 4096, per=40.76%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.082 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.082 lat (usec) : 250=3.02%, 500=32.45%, 750=51.51%, 1000=9.62% 00:10:31.082 lat (msec) : 2=0.19%, 50=3.21% 00:10:31.082 cpu : usr=0.69%, sys=0.59%, ctx=537, majf=0, minf=1 00:10:31.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.082 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.082 job1: (groupid=0, jobs=1): err= 0: pid=1402137: Mon Dec 9 05:03:44 2024 00:10:31.082 read: IOPS=17, BW=71.8KiB/s (73.5kB/s)(72.0KiB/1003msec) 00:10:31.082 slat (nsec): min=25875, max=26880, avg=26328.11, stdev=275.46 00:10:31.082 clat (usec): min=626, max=42035, avg=39299.59, stdev=9659.81 00:10:31.082 lat (usec): min=652, max=42062, avg=39325.92, stdev=9659.79 00:10:31.082 clat percentiles (usec): 00:10:31.082 | 1.00th=[ 627], 5.00th=[ 627], 10.00th=[41157], 20.00th=[41157], 00:10:31.082 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:31.082 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:31.082 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:31.082 | 99.99th=[42206] 00:10:31.082 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:31.082 slat (nsec): min=10566, max=68131, avg=33507.17, stdev=7685.52 00:10:31.082 clat (usec): min=245, max=889, avg=528.16, stdev=122.15 00:10:31.082 lat (usec): min=258, max=904, avg=561.67, stdev=123.88 00:10:31.082 clat percentiles (usec): 00:10:31.082 | 1.00th=[ 269], 5.00th=[ 355], 10.00th=[ 383], 20.00th=[ 416], 00:10:31.082 | 30.00th=[ 449], 40.00th=[ 490], 50.00th=[ 515], 60.00th=[ 553], 00:10:31.082 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 734], 00:10:31.082 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 889], 99.95th=[ 889], 00:10:31.082 | 99.99th=[ 889] 00:10:31.082 bw ( KiB/s): min= 4096, max= 4096, per=40.76%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.082 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.082 lat (usec) : 250=0.38%, 500=42.08%, 750=50.94%, 1000=3.40% 00:10:31.082 lat (msec) : 50=3.21% 00:10:31.082 cpu : usr=0.80%, sys=1.70%, ctx=532, majf=0, minf=1 00:10:31.082 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.082 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.082 job2: (groupid=0, jobs=1): err= 0: pid=1402138: Mon Dec 9 05:03:44 2024 00:10:31.082 read: IOPS=665, BW=2661KiB/s (2725kB/s)(2664KiB/1001msec) 00:10:31.082 slat (nsec): min=7334, max=46804, avg=24258.80, stdev=7874.88 00:10:31.082 clat (usec): min=284, max=3037, avg=762.04, stdev=117.65 00:10:31.082 lat (usec): min=312, max=3064, avg=786.30, stdev=118.68 00:10:31.082 clat percentiles (usec): 00:10:31.082 | 1.00th=[ 529], 5.00th=[ 619], 10.00th=[ 660], 20.00th=[ 701], 00:10:31.082 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 783], 00:10:31.082 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 857], 00:10:31.082 | 99.00th=[ 906], 99.50th=[ 906], 99.90th=[ 3032], 99.95th=[ 3032], 00:10:31.082 | 99.99th=[ 3032] 00:10:31.082 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:31.082 slat (nsec): min=9970, max=68923, avg=28669.45, stdev=11312.71 00:10:31.082 clat (usec): min=127, max=631, avg=422.68, stdev=69.40 00:10:31.082 lat (usec): min=137, max=666, avg=451.35, stdev=74.19 00:10:31.082 clat percentiles (usec): 00:10:31.082 | 1.00th=[ 239], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 347], 00:10:31.082 | 30.00th=[ 375], 40.00th=[ 424], 50.00th=[ 445], 60.00th=[ 457], 00:10:31.082 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 494], 95.00th=[ 510], 00:10:31.082 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 611], 99.95th=[ 635], 00:10:31.082 | 99.99th=[ 635] 00:10:31.082 bw ( KiB/s): min= 4096, max= 4096, per=40.76%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.082 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.082 lat (usec) : 250=0.71%, 500=55.74%, 750=17.69%, 1000=25.80% 00:10:31.082 lat (msec) : 4=0.06% 00:10:31.082 cpu : usr=2.00%, sys=5.10%, ctx=1692, majf=0, minf=1 00:10:31.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.082 issued rwts: total=666,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.082 job3: (groupid=0, jobs=1): err= 0: pid=1402139: Mon Dec 9 05:03:44 2024 00:10:31.082 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1019msec) 00:10:31.082 slat (nsec): min=14469, max=29537, avg=26920.50, stdev=3198.02 00:10:31.082 clat (usec): min=942, max=42148, avg=39599.00, stdev=9651.28 00:10:31.082 lat (usec): min=971, max=42176, avg=39625.92, stdev=9650.67 00:10:31.082 clat percentiles (usec): 00:10:31.082 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[41157], 20.00th=[41681], 00:10:31.082 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:31.082 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:31.082 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:31.082 | 99.99th=[42206] 00:10:31.082 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:10:31.082 slat (nsec): min=10683, max=55698, avg=30324.72, stdev=9776.81 00:10:31.082 clat (usec): min=119, 
max=752, avg=553.40, stdev=104.03 00:10:31.082 lat (usec): min=132, max=764, avg=583.72, stdev=103.75 00:10:31.082 clat percentiles (usec): 00:10:31.082 | 1.00th=[ 161], 5.00th=[ 326], 10.00th=[ 424], 20.00th=[ 494], 00:10:31.082 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 594], 00:10:31.082 | 70.00th=[ 603], 80.00th=[ 627], 90.00th=[ 660], 95.00th=[ 685], 00:10:31.082 | 99.00th=[ 709], 99.50th=[ 717], 99.90th=[ 750], 99.95th=[ 750], 00:10:31.082 | 99.99th=[ 750] 00:10:31.082 bw ( KiB/s): min= 4096, max= 4096, per=40.76%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.082 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.082 lat (usec) : 250=2.08%, 500=18.30%, 750=76.04%, 1000=0.38% 00:10:31.082 lat (msec) : 50=3.21% 00:10:31.082 cpu : usr=1.08%, sys=1.08%, ctx=532, majf=0, minf=1 00:10:31.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.082 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.082 00:10:31.082 Run status group 0 (all jobs): 00:10:31.082 READ: bw=2826KiB/s (2894kB/s), 70.7KiB/s-2661KiB/s (72.4kB/s-2725kB/s), io=2880KiB (2949kB), run=1001-1019msec 00:10:31.082 WRITE: bw=9.81MiB/s (10.3MB/s), 2010KiB/s-4092KiB/s (2058kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1019msec 00:10:31.082 00:10:31.082 Disk stats (read/write): 00:10:31.082 nvme0n1: ios=62/512, merge=0/0, ticks=687/261, in_queue=948, util=86.97% 00:10:31.082 nvme0n2: ios=63/512, merge=0/0, ticks=1374/246, in_queue=1620, util=88.07% 00:10:31.082 nvme0n3: ios=569/942, merge=0/0, ticks=1080/387, in_queue=1467, util=92.41% 00:10:31.082 nvme0n4: ios=70/512, merge=0/0, ticks=1556/270, in_queue=1826, util=94.04% 00:10:31.082 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:31.082 [global] 00:10:31.082 thread=1 00:10:31.082 invalidate=1 00:10:31.082 rw=randwrite 00:10:31.082 time_based=1 00:10:31.082 runtime=1 00:10:31.082 ioengine=libaio 00:10:31.082 direct=1 00:10:31.082 bs=4096 00:10:31.082 iodepth=1 00:10:31.082 norandommap=0 00:10:31.082 numjobs=1 00:10:31.082 00:10:31.082 verify_dump=1 00:10:31.082 verify_backlog=512 00:10:31.082 verify_state_save=0 00:10:31.082 do_verify=1 00:10:31.082 verify=crc32c-intel 00:10:31.082 [job0] 00:10:31.082 filename=/dev/nvme0n1 00:10:31.082 [job1] 00:10:31.082 filename=/dev/nvme0n2 00:10:31.082 [job2] 00:10:31.082 filename=/dev/nvme0n3 00:10:31.082 [job3] 00:10:31.082 filename=/dev/nvme0n4 00:10:31.083 Could not set queue depth (nvme0n1) 00:10:31.083 Could not set queue depth (nvme0n2) 00:10:31.083 Could not set queue depth (nvme0n3) 00:10:31.083 Could not set queue depth (nvme0n4) 00:10:31.340 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.340 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.340 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.340 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.340 fio-3.35 00:10:31.340 Starting 4 threads 
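Note: each fio pass is driven by a generated job file like the one echoed above: four one-second libaio jobs at the given queue depth, one per namespace, with CRC32C verification so every write is read back and checked. A standalone reduction of job0 (file name verify.fio is arbitrary; less essential options from the wrapper are omitted):

    cat > verify.fio <<'EOF'
    [global]
    ioengine=libaio       # async I/O against the raw block device
    direct=1              # bypass the page cache
    bs=4096
    iodepth=1
    rw=randwrite
    time_based=1
    runtime=1
    verify=crc32c-intel   # write, then read back and CRC-check
    do_verify=1
    verify_dump=1
    verify_backlog=512
    [job0]
    filename=/dev/nvme0n1
    EOF
    fio verify.fio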
00:10:32.749 00:10:32.749 job0: (groupid=0, jobs=1): err= 0: pid=1402661: Mon Dec 9 05:03:46 2024 00:10:32.749 read: IOPS=561, BW=2246KiB/s (2300kB/s)(2248KiB/1001msec) 00:10:32.749 slat (nsec): min=7092, max=45052, avg=23982.21, stdev=7185.86 00:10:32.749 clat (usec): min=455, max=41574, avg=840.03, stdev=1724.57 00:10:32.749 lat (usec): min=463, max=41585, avg=864.02, stdev=1724.13 00:10:32.749 clat percentiles (usec): 00:10:32.749 | 1.00th=[ 529], 5.00th=[ 611], 10.00th=[ 652], 20.00th=[ 693], 00:10:32.749 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 775], 00:10:32.749 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 914], 95.00th=[ 955], 00:10:32.749 | 99.00th=[ 1074], 99.50th=[ 1123], 99.90th=[41681], 99.95th=[41681], 00:10:32.749 | 99.99th=[41681] 00:10:32.749 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:32.749 slat (nsec): min=9759, max=51123, avg=27428.33, stdev=10368.07 00:10:32.749 clat (usec): min=220, max=1016, avg=463.99, stdev=131.27 00:10:32.749 lat (usec): min=246, max=1049, avg=491.42, stdev=136.15 00:10:32.749 clat percentiles (usec): 00:10:32.749 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 351], 00:10:32.749 | 30.00th=[ 396], 40.00th=[ 437], 50.00th=[ 457], 60.00th=[ 482], 00:10:32.749 | 70.00th=[ 506], 80.00th=[ 545], 90.00th=[ 635], 95.00th=[ 709], 00:10:32.749 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 979], 99.95th=[ 1020], 00:10:32.749 | 99.99th=[ 1020] 00:10:32.749 bw ( KiB/s): min= 4096, max= 4096, per=32.10%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.749 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.749 lat (usec) : 250=0.50%, 500=43.25%, 750=33.48%, 1000=21.88% 00:10:32.749 lat (msec) : 2=0.82%, 50=0.06% 00:10:32.749 cpu : usr=2.40%, sys=4.10%, ctx=1589, majf=0, minf=1 00:10:32.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.749 issued rwts: total=562,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.749 job1: (groupid=0, jobs=1): err= 0: pid=1402662: Mon Dec 9 05:03:46 2024 00:10:32.749 read: IOPS=181, BW=728KiB/s (745kB/s)(744KiB/1022msec) 00:10:32.749 slat (nsec): min=7300, max=44148, avg=25355.08, stdev=4686.86 00:10:32.749 clat (usec): min=492, max=42002, avg=4195.08, stdev=10994.40 00:10:32.749 lat (usec): min=501, max=42028, avg=4220.44, stdev=10994.25 00:10:32.749 clat percentiles (usec): 00:10:32.749 | 1.00th=[ 570], 5.00th=[ 693], 10.00th=[ 766], 20.00th=[ 848], 00:10:32.749 | 30.00th=[ 906], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 1012], 00:10:32.749 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1205], 95.00th=[41681], 00:10:32.749 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:32.749 | 99.99th=[42206] 00:10:32.749 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:10:32.749 slat (nsec): min=9813, max=51193, avg=23520.37, stdev=11511.66 00:10:32.749 clat (usec): min=212, max=902, avg=429.58, stdev=124.95 00:10:32.749 lat (usec): min=245, max=935, avg=453.10, stdev=130.20 00:10:32.749 clat percentiles (usec): 00:10:32.749 | 1.00th=[ 260], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 302], 00:10:32.749 | 30.00th=[ 330], 40.00th=[ 383], 50.00th=[ 424], 60.00th=[ 461], 00:10:32.749 | 70.00th=[ 490], 80.00th=[ 537], 90.00th=[ 603], 
95.00th=[ 668], 00:10:32.749 | 99.00th=[ 766], 99.50th=[ 816], 99.90th=[ 906], 99.95th=[ 906], 00:10:32.749 | 99.99th=[ 906] 00:10:32.749 bw ( KiB/s): min= 4096, max= 4096, per=32.10%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.749 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.749 lat (usec) : 250=0.14%, 500=53.72%, 750=20.77%, 1000=14.04% 00:10:32.749 lat (msec) : 2=9.17%, 50=2.15% 00:10:32.749 cpu : usr=0.88%, sys=1.57%, ctx=699, majf=0, minf=1 00:10:32.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.749 issued rwts: total=186,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.749 job2: (groupid=0, jobs=1): err= 0: pid=1402665: Mon Dec 9 05:03:46 2024 00:10:32.749 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:32.749 slat (nsec): min=25778, max=60121, avg=26917.91, stdev=3624.11 00:10:32.749 clat (usec): min=775, max=1308, avg=1032.14, stdev=94.98 00:10:32.749 lat (usec): min=802, max=1334, avg=1059.06, stdev=94.65 00:10:32.749 clat percentiles (usec): 00:10:32.749 | 1.00th=[ 791], 5.00th=[ 848], 10.00th=[ 906], 20.00th=[ 947], 00:10:32.749 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:10:32.749 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1172], 00:10:32.749 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1303], 00:10:32.749 | 99.99th=[ 1303] 00:10:32.749 write: IOPS=699, BW=2797KiB/s (2864kB/s)(2800KiB/1001msec); 0 zone resets 00:10:32.749 slat (nsec): min=10129, max=56513, avg=31631.02, stdev=7547.52 00:10:32.749 clat (usec): min=226, max=1125, avg=607.44, stdev=126.49 00:10:32.749 lat (usec): min=237, max=1158, avg=639.07, stdev=128.37 00:10:32.749 clat percentiles (usec): 00:10:32.749 | 1.00th=[ 318], 5.00th=[ 404], 10.00th=[ 453], 20.00th=[ 502], 00:10:32.749 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 627], 00:10:32.749 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 824], 00:10:32.749 | 99.00th=[ 963], 99.50th=[ 1012], 99.90th=[ 1123], 99.95th=[ 1123], 00:10:32.749 | 99.99th=[ 1123] 00:10:32.749 bw ( KiB/s): min= 4096, max= 4096, per=32.10%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.750 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.750 lat (usec) : 250=0.08%, 500=10.97%, 750=40.43%, 1000=18.73% 00:10:32.750 lat (msec) : 2=29.79% 00:10:32.750 cpu : usr=2.30%, sys=3.30%, ctx=1213, majf=0, minf=2 00:10:32.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.750 issued rwts: total=512,700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.750 job3: (groupid=0, jobs=1): err= 0: pid=1402666: Mon Dec 9 05:03:46 2024 00:10:32.750 read: IOPS=633, BW=2533KiB/s (2594kB/s)(2536KiB/1001msec) 00:10:32.750 slat (nsec): min=7248, max=60331, avg=23562.23, stdev=8738.75 00:10:32.750 clat (usec): min=342, max=938, avg=776.27, stdev=64.68 00:10:32.750 lat (usec): min=351, max=965, avg=799.83, stdev=66.54 00:10:32.750 clat percentiles (usec): 00:10:32.750 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 685], 
20.00th=[ 725], 00:10:32.750 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 783], 60.00th=[ 799], 00:10:32.750 | 70.00th=[ 816], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 873], 00:10:32.750 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 938], 99.95th=[ 938], 00:10:32.750 | 99.99th=[ 938] 00:10:32.750 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:32.750 slat (nsec): min=9948, max=56028, avg=28661.01, stdev=10743.81 00:10:32.750 clat (usec): min=140, max=907, avg=441.02, stdev=80.12 00:10:32.750 lat (usec): min=151, max=941, avg=469.68, stdev=84.01 00:10:32.750 clat percentiles (usec): 00:10:32.750 | 1.00th=[ 273], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 355], 00:10:32.750 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 449], 60.00th=[ 461], 00:10:32.750 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 529], 95.00th=[ 570], 00:10:32.750 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 848], 99.95th=[ 906], 00:10:32.750 | 99.99th=[ 906] 00:10:32.750 bw ( KiB/s): min= 4096, max= 4096, per=32.10%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.750 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.750 lat (usec) : 250=0.36%, 500=50.12%, 750=22.44%, 1000=27.08% 00:10:32.750 cpu : usr=2.20%, sys=4.60%, ctx=1660, majf=0, minf=1 00:10:32.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.750 issued rwts: total=634,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.750 00:10:32.750 Run status group 0 (all jobs): 00:10:32.750 READ: bw=7413KiB/s (7591kB/s), 728KiB/s-2533KiB/s (745kB/s-2594kB/s), io=7576KiB (7758kB), run=1001-1022msec 00:10:32.750 WRITE: bw=12.5MiB/s (13.1MB/s), 2004KiB/s-4092KiB/s (2052kB/s-4190kB/s), io=12.7MiB (13.4MB), run=1001-1022msec 00:10:32.750 00:10:32.750 Disk stats (read/write): 00:10:32.750 nvme0n1: ios=553/677, merge=0/0, ticks=584/317, in_queue=901, util=95.99% 00:10:32.750 nvme0n2: ios=203/512, merge=0/0, ticks=1489/219, in_queue=1708, util=97.94% 00:10:32.750 nvme0n3: ios=497/512, merge=0/0, ticks=921/300, in_queue=1221, util=97.61% 00:10:32.750 nvme0n4: ios=547/813, merge=0/0, ticks=561/338, in_queue=899, util=100.00% 00:10:32.750 05:03:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:32.750 [global] 00:10:32.750 thread=1 00:10:32.750 invalidate=1 00:10:32.750 rw=write 00:10:32.750 time_based=1 00:10:32.750 runtime=1 00:10:32.750 ioengine=libaio 00:10:32.750 direct=1 00:10:32.750 bs=4096 00:10:32.750 iodepth=128 00:10:32.750 norandommap=0 00:10:32.750 numjobs=1 00:10:32.750 00:10:32.750 verify_dump=1 00:10:32.750 verify_backlog=512 00:10:32.750 verify_state_save=0 00:10:32.750 do_verify=1 00:10:32.750 verify=crc32c-intel 00:10:32.750 [job0] 00:10:32.750 filename=/dev/nvme0n1 00:10:32.750 [job1] 00:10:32.750 filename=/dev/nvme0n2 00:10:32.750 [job2] 00:10:32.750 filename=/dev/nvme0n3 00:10:32.750 [job3] 00:10:32.750 filename=/dev/nvme0n4 00:10:32.750 Could not set queue depth (nvme0n1) 00:10:32.750 Could not set queue depth (nvme0n2) 00:10:32.750 Could not set queue depth (nvme0n3) 00:10:32.750 Could not set queue depth (nvme0n4) 00:10:33.008 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:33.008 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.008 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.008 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.008 fio-3.35 00:10:33.008 Starting 4 threads 00:10:34.411 00:10:34.411 job0: (groupid=0, jobs=1): err= 0: pid=1403185: Mon Dec 9 05:03:48 2024 00:10:34.411 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:10:34.411 slat (nsec): min=963, max=19882k, avg=101251.42, stdev=781741.34 00:10:34.411 clat (usec): min=2915, max=57521, avg=12974.45, stdev=7792.37 00:10:34.411 lat (usec): min=2923, max=58437, avg=13075.70, stdev=7883.55 00:10:34.411 clat percentiles (usec): 00:10:34.411 | 1.00th=[ 4817], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 7308], 00:10:34.411 | 30.00th=[ 7767], 40.00th=[ 8717], 50.00th=[10290], 60.00th=[13829], 00:10:34.411 | 70.00th=[14484], 80.00th=[17433], 90.00th=[21365], 95.00th=[30278], 00:10:34.411 | 99.00th=[41681], 99.50th=[51643], 99.90th=[57410], 99.95th=[57410], 00:10:34.411 | 99.99th=[57410] 00:10:34.411 write: IOPS=4195, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1012msec); 0 zone resets 00:10:34.411 slat (nsec): min=1731, max=17167k, avg=121374.06, stdev=726959.79 00:10:34.411 clat (usec): min=499, max=59761, avg=17627.20, stdev=15444.38 00:10:34.411 lat (usec): min=510, max=59774, avg=17748.57, stdev=15553.31 00:10:34.411 clat percentiles (usec): 00:10:34.411 | 1.00th=[ 1647], 5.00th=[ 3621], 10.00th=[ 4883], 20.00th=[ 6128], 00:10:34.411 | 30.00th=[ 7439], 40.00th=[ 9896], 50.00th=[11994], 60.00th=[14222], 00:10:34.411 | 70.00th=[18482], 80.00th=[27395], 90.00th=[49021], 95.00th=[54789], 00:10:34.411 | 99.00th=[57934], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:10:34.411 | 99.99th=[59507] 00:10:34.411 bw ( KiB/s): min=16384, max=16568, per=20.52%, avg=16476.00, stdev=130.11, samples=2 00:10:34.411 iops : min= 4096, max= 4142, avg=4119.00, stdev=32.53, samples=2 00:10:34.411 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.16% 00:10:34.411 lat (msec) : 2=0.65%, 4=2.55%, 10=42.78%, 20=34.91%, 50=13.68% 00:10:34.411 lat (msec) : 100=5.21% 00:10:34.411 cpu : usr=2.18%, sys=6.43%, ctx=318, majf=0, minf=1 00:10:34.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:34.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.411 issued rwts: total=4096,4246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.411 job1: (groupid=0, jobs=1): err= 0: pid=1403186: Mon Dec 9 05:03:48 2024 00:10:34.411 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:10:34.411 slat (nsec): min=994, max=16142k, avg=104946.25, stdev=765367.83 00:10:34.411 clat (usec): min=3547, max=98861, avg=12475.27, stdev=11406.87 00:10:34.411 lat (usec): min=3552, max=98870, avg=12580.22, stdev=11510.62 00:10:34.411 clat percentiles (usec): 00:10:34.411 | 1.00th=[ 5604], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6718], 00:10:34.411 | 30.00th=[ 7046], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9765], 00:10:34.411 | 70.00th=[11338], 80.00th=[15795], 90.00th=[22676], 95.00th=[25297], 00:10:34.411 | 99.00th=[83362], 99.50th=[89654], 99.90th=[95945], 99.95th=[99091], 00:10:34.411 | 99.99th=[99091] 
00:10:34.411 write: IOPS=4472, BW=17.5MiB/s (18.3MB/s)(17.7MiB/1012msec); 0 zone resets 00:10:34.411 slat (nsec): min=1641, max=12582k, avg=120106.85, stdev=700584.79 00:10:34.411 clat (usec): min=1117, max=107603, avg=17084.43, stdev=19188.93 00:10:34.411 lat (usec): min=1126, max=107609, avg=17204.54, stdev=19296.75 00:10:34.411 clat percentiles (msec): 00:10:34.411 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:10:34.411 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 9], 60.00th=[ 11], 00:10:34.411 | 70.00th=[ 19], 80.00th=[ 26], 90.00th=[ 39], 95.00th=[ 68], 00:10:34.411 | 99.00th=[ 92], 99.50th=[ 102], 99.90th=[ 108], 99.95th=[ 108], 00:10:34.411 | 99.99th=[ 108] 00:10:34.411 bw ( KiB/s): min=16624, max=18568, per=21.91%, avg=17596.00, stdev=1374.62, samples=2 00:10:34.411 iops : min= 4156, max= 4642, avg=4399.00, stdev=343.65, samples=2 00:10:34.411 lat (msec) : 2=0.16%, 4=1.65%, 10=57.83%, 20=19.87%, 50=15.80% 00:10:34.411 lat (msec) : 100=4.43%, 250=0.27% 00:10:34.411 cpu : usr=3.46%, sys=5.24%, ctx=339, majf=0, minf=1 00:10:34.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:34.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.411 issued rwts: total=4096,4526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.411 job2: (groupid=0, jobs=1): err= 0: pid=1403187: Mon Dec 9 05:03:48 2024 00:10:34.411 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:10:34.411 slat (nsec): min=1003, max=16625k, avg=70471.31, stdev=583261.40 00:10:34.411 clat (usec): min=2523, max=33812, avg=9660.04, stdev=4067.02 00:10:34.411 lat (usec): min=2531, max=33840, avg=9730.51, stdev=4105.86 00:10:34.411 clat percentiles (usec): 00:10:34.411 | 1.00th=[ 5211], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 6783], 00:10:34.411 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 8225], 60.00th=[ 9110], 00:10:34.411 | 70.00th=[10290], 80.00th=[12125], 90.00th=[15139], 95.00th=[18220], 00:10:34.411 | 99.00th=[24773], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:10:34.411 | 99.99th=[33817] 00:10:34.411 write: IOPS=6922, BW=27.0MiB/s (28.4MB/s)(27.2MiB/1005msec); 0 zone resets 00:10:34.411 slat (nsec): min=1667, max=10555k, avg=70775.90, stdev=500013.82 00:10:34.411 clat (usec): min=1159, max=59173, avg=9083.29, stdev=8246.80 00:10:34.411 lat (usec): min=1169, max=59177, avg=9154.06, stdev=8290.38 00:10:34.411 clat percentiles (usec): 00:10:34.411 | 1.00th=[ 3064], 5.00th=[ 4080], 10.00th=[ 4293], 20.00th=[ 5538], 00:10:34.411 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7439], 00:10:34.411 | 70.00th=[ 7701], 80.00th=[ 9503], 90.00th=[13435], 95.00th=[21103], 00:10:34.411 | 99.00th=[51119], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:10:34.411 | 99.99th=[58983] 00:10:34.411 bw ( KiB/s): min=25968, max=28672, per=34.02%, avg=27320.00, stdev=1912.02, samples=2 00:10:34.411 iops : min= 6492, max= 7168, avg=6830.00, stdev=478.00, samples=2 00:10:34.411 lat (msec) : 2=0.19%, 4=2.37%, 10=73.59%, 20=19.22%, 50=3.94% 00:10:34.411 lat (msec) : 100=0.69% 00:10:34.411 cpu : usr=5.98%, sys=6.67%, ctx=449, majf=0, minf=1 00:10:34.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:34.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:10:34.411 issued rwts: total=6656,6957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.411 job3: (groupid=0, jobs=1): err= 0: pid=1403188: Mon Dec 9 05:03:48 2024 00:10:34.411 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.1MiB/1013msec) 00:10:34.411 slat (usec): min=2, max=13383, avg=87.33, stdev=759.45 00:10:34.411 clat (usec): min=3652, max=35396, avg=12564.36, stdev=4713.15 00:10:34.411 lat (usec): min=3662, max=35421, avg=12651.69, stdev=4782.21 00:10:34.411 clat percentiles (usec): 00:10:34.412 | 1.00th=[ 6194], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 7963], 00:10:34.412 | 30.00th=[ 9372], 40.00th=[10814], 50.00th=[12518], 60.00th=[13566], 00:10:34.412 | 70.00th=[13829], 80.00th=[15270], 90.00th=[19006], 95.00th=[22676], 00:10:34.412 | 99.00th=[24249], 99.50th=[30278], 99.90th=[32637], 99.95th=[33424], 00:10:34.412 | 99.99th=[35390] 00:10:34.412 write: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec); 0 zone resets 00:10:34.412 slat (nsec): min=1683, max=11767k, avg=108606.47, stdev=681763.24 00:10:34.412 clat (usec): min=858, max=71611, avg=16610.30, stdev=14381.97 00:10:34.412 lat (usec): min=868, max=71621, avg=16718.91, stdev=14478.12 00:10:34.412 clat percentiles (usec): 00:10:34.412 | 1.00th=[ 3458], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 6718], 00:10:34.412 | 30.00th=[ 8029], 40.00th=[10028], 50.00th=[10814], 60.00th=[12649], 00:10:34.412 | 70.00th=[15795], 80.00th=[22676], 90.00th=[40109], 95.00th=[54264], 00:10:34.412 | 99.00th=[61080], 99.50th=[62129], 99.90th=[71828], 99.95th=[71828], 00:10:34.412 | 99.99th=[71828] 00:10:34.412 bw ( KiB/s): min=16496, max=19480, per=22.40%, avg=17988.00, stdev=2110.01, samples=2 00:10:34.412 iops : min= 4124, max= 4870, avg=4497.00, stdev=527.50, samples=2 00:10:34.412 lat (usec) : 1000=0.03% 00:10:34.412 lat (msec) : 2=0.19%, 4=0.50%, 10=37.71%, 20=45.56%, 50=12.32% 00:10:34.412 lat (msec) : 100=3.68% 00:10:34.412 cpu : usr=4.25%, sys=5.24%, ctx=311, majf=0, minf=1 00:10:34.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:34.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.412 issued rwts: total=4112,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.412 00:10:34.412 Run status group 0 (all jobs): 00:10:34.412 READ: bw=73.1MiB/s (76.7MB/s), 15.8MiB/s-25.9MiB/s (16.6MB/s-27.1MB/s), io=74.1MiB (77.7MB), run=1005-1013msec 00:10:34.412 WRITE: bw=78.4MiB/s (82.2MB/s), 16.4MiB/s-27.0MiB/s (17.2MB/s-28.4MB/s), io=79.4MiB (83.3MB), run=1005-1013msec 00:10:34.412 00:10:34.412 Disk stats (read/write): 00:10:34.412 nvme0n1: ios=3606/3647, merge=0/0, ticks=47624/52555, in_queue=100179, util=99.60% 00:10:34.412 nvme0n2: ios=3099/3079, merge=0/0, ticks=41904/61712, in_queue=103616, util=89.61% 00:10:34.412 nvme0n3: ios=5688/5943, merge=0/0, ticks=52371/47257, in_queue=99628, util=92.96% 00:10:34.412 nvme0n4: ios=3636/4043, merge=0/0, ticks=45535/57086, in_queue=102621, util=98.73% 00:10:34.412 05:03:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:34.412 [global] 00:10:34.412 thread=1 00:10:34.412 invalidate=1 00:10:34.412 rw=randwrite 00:10:34.412 time_based=1 00:10:34.412 runtime=1 00:10:34.412 
ioengine=libaio 00:10:34.412 direct=1 00:10:34.412 bs=4096 00:10:34.412 iodepth=128 00:10:34.412 norandommap=0 00:10:34.412 numjobs=1 00:10:34.412 00:10:34.412 verify_dump=1 00:10:34.412 verify_backlog=512 00:10:34.412 verify_state_save=0 00:10:34.412 do_verify=1 00:10:34.412 verify=crc32c-intel 00:10:34.412 [job0] 00:10:34.412 filename=/dev/nvme0n1 00:10:34.412 [job1] 00:10:34.412 filename=/dev/nvme0n2 00:10:34.412 [job2] 00:10:34.412 filename=/dev/nvme0n3 00:10:34.412 [job3] 00:10:34.412 filename=/dev/nvme0n4 00:10:34.412 Could not set queue depth (nvme0n1) 00:10:34.412 Could not set queue depth (nvme0n2) 00:10:34.412 Could not set queue depth (nvme0n3) 00:10:34.412 Could not set queue depth (nvme0n4) 00:10:34.674 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.674 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.674 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.674 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.674 fio-3.35 00:10:34.674 Starting 4 threads 00:10:36.058 00:10:36.058 job0: (groupid=0, jobs=1): err= 0: pid=1403712: Mon Dec 9 05:03:49 2024 00:10:36.058 read: IOPS=3629, BW=14.2MiB/s (14.9MB/s)(14.8MiB/1045msec) 00:10:36.058 slat (nsec): min=916, max=19146k, avg=131421.71, stdev=962361.27 00:10:36.058 clat (usec): min=4580, max=67377, avg=16527.23, stdev=13749.75 00:10:36.058 lat (usec): min=4592, max=76583, avg=16658.65, stdev=13861.84 00:10:36.058 clat percentiles (usec): 00:10:36.058 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6915], 20.00th=[ 7177], 00:10:36.058 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 9503], 60.00th=[13304], 00:10:36.058 | 70.00th=[19530], 80.00th=[23987], 90.00th=[37487], 95.00th=[48497], 00:10:36.058 | 99.00th=[65799], 99.50th=[65799], 99.90th=[66323], 99.95th=[66323], 00:10:36.058 | 99.99th=[67634] 00:10:36.058 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:10:36.058 slat (nsec): min=1518, max=12739k, avg=117851.96, stdev=680404.95 00:10:36.058 clat (usec): min=4086, max=85735, avg=16956.07, stdev=13901.84 00:10:36.058 lat (usec): min=4180, max=85748, avg=17073.92, stdev=13994.41 00:10:36.058 clat percentiles (usec): 00:10:36.058 | 1.00th=[ 4424], 5.00th=[ 7046], 10.00th=[ 7177], 20.00th=[ 7504], 00:10:36.058 | 30.00th=[ 8717], 40.00th=[11731], 50.00th=[13829], 60.00th=[15533], 00:10:36.058 | 70.00th=[17957], 80.00th=[20841], 90.00th=[27395], 95.00th=[54789], 00:10:36.058 | 99.00th=[74974], 99.50th=[76022], 99.90th=[85459], 99.95th=[85459], 00:10:36.058 | 99.99th=[85459] 00:10:36.058 bw ( KiB/s): min=15312, max=17456, per=19.00%, avg=16384.00, stdev=1516.04, samples=2 00:10:36.058 iops : min= 3828, max= 4364, avg=4096.00, stdev=379.01, samples=2 00:10:36.058 lat (msec) : 10=42.16%, 20=33.49%, 50=19.56%, 100=4.79% 00:10:36.058 cpu : usr=2.20%, sys=4.02%, ctx=401, majf=0, minf=1 00:10:36.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:36.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.058 issued rwts: total=3793,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.058 job1: (groupid=0, jobs=1): err= 0: pid=1403713: 
Mon Dec 9 05:03:49 2024 00:10:36.058 read: IOPS=6585, BW=25.7MiB/s (27.0MB/s)(25.9MiB/1005msec) 00:10:36.058 slat (nsec): min=899, max=20777k, avg=72268.13, stdev=647489.82 00:10:36.058 clat (usec): min=2266, max=40686, avg=9853.66, stdev=4419.02 00:10:36.058 lat (usec): min=2276, max=49104, avg=9925.93, stdev=4479.77 00:10:36.058 clat percentiles (usec): 00:10:36.058 | 1.00th=[ 3982], 5.00th=[ 5669], 10.00th=[ 6652], 20.00th=[ 7308], 00:10:36.058 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[ 9372], 00:10:36.058 | 70.00th=[10159], 80.00th=[11338], 90.00th=[14353], 95.00th=[19792], 00:10:36.058 | 99.00th=[29230], 99.50th=[30278], 99.90th=[30540], 99.95th=[34866], 00:10:36.058 | 99.99th=[40633] 00:10:36.058 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:10:36.058 slat (nsec): min=1580, max=12638k, avg=65706.35, stdev=502949.69 00:10:36.058 clat (usec): min=1191, max=59459, avg=9359.43, stdev=7027.17 00:10:36.058 lat (usec): min=1219, max=60739, avg=9425.14, stdev=7079.57 00:10:36.058 clat percentiles (usec): 00:10:36.058 | 1.00th=[ 1893], 5.00th=[ 4015], 10.00th=[ 4817], 20.00th=[ 5932], 00:10:36.058 | 30.00th=[ 6521], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7963], 00:10:36.058 | 70.00th=[ 9634], 80.00th=[11207], 90.00th=[15401], 95.00th=[18482], 00:10:36.058 | 99.00th=[47973], 99.50th=[53216], 99.90th=[59507], 99.95th=[59507], 00:10:36.058 | 99.99th=[59507] 00:10:36.058 bw ( KiB/s): min=20480, max=32768, per=30.88%, avg=26624.00, stdev=8688.93, samples=2 00:10:36.058 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:10:36.058 lat (msec) : 2=0.52%, 4=2.55%, 10=68.62%, 20=24.61%, 50=3.34% 00:10:36.058 lat (msec) : 100=0.37% 00:10:36.058 cpu : usr=4.58%, sys=7.67%, ctx=381, majf=0, minf=2 00:10:36.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:36.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.058 issued rwts: total=6618,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.058 job2: (groupid=0, jobs=1): err= 0: pid=1403715: Mon Dec 9 05:03:49 2024 00:10:36.058 read: IOPS=7138, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1003msec) 00:10:36.058 slat (nsec): min=1039, max=10667k, avg=72749.13, stdev=560829.21 00:10:36.058 clat (usec): min=1031, max=30302, avg=9241.82, stdev=3143.84 00:10:36.058 lat (usec): min=2341, max=30314, avg=9314.57, stdev=3182.97 00:10:36.058 clat percentiles (usec): 00:10:36.058 | 1.00th=[ 4146], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 7111], 00:10:36.058 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 8848], 00:10:36.058 | 70.00th=[ 9634], 80.00th=[10945], 90.00th=[13698], 95.00th=[15270], 00:10:36.058 | 99.00th=[23200], 99.50th=[23200], 99.90th=[25297], 99.95th=[28181], 00:10:36.058 | 99.99th=[30278] 00:10:36.058 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:10:36.058 slat (nsec): min=1672, max=11729k, avg=61386.65, stdev=413451.58 00:10:36.058 clat (usec): min=1201, max=30287, avg=8498.17, stdev=3518.41 00:10:36.058 lat (usec): min=1212, max=30296, avg=8559.56, stdev=3544.61 00:10:36.058 clat percentiles (usec): 00:10:36.058 | 1.00th=[ 2933], 5.00th=[ 4113], 10.00th=[ 5080], 20.00th=[ 6390], 00:10:36.058 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 7963], 60.00th=[ 8291], 00:10:36.058 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[13042], 
95.00th=[17957], 00:10:36.058 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20841], 99.95th=[21365], 00:10:36.058 | 99.99th=[30278] 00:10:36.058 bw ( KiB/s): min=24600, max=32744, per=33.25%, avg=28672.00, stdev=5758.68, samples=2 00:10:36.058 iops : min= 6150, max= 8186, avg=7168.00, stdev=1439.67, samples=2 00:10:36.058 lat (msec) : 2=0.12%, 4=2.23%, 10=77.80%, 20=18.68%, 50=1.18% 00:10:36.058 cpu : usr=4.59%, sys=8.38%, ctx=608, majf=0, minf=1 00:10:36.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:36.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.058 issued rwts: total=7160,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.058 job3: (groupid=0, jobs=1): err= 0: pid=1403716: Mon Dec 9 05:03:49 2024 00:10:36.058 read: IOPS=4577, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1005msec) 00:10:36.059 slat (nsec): min=954, max=24995k, avg=116574.61, stdev=886495.65 00:10:36.059 clat (usec): min=1157, max=72241, avg=15033.15, stdev=10265.38 00:10:36.059 lat (usec): min=6061, max=72266, avg=15149.72, stdev=10353.76 00:10:36.059 clat percentiles (usec): 00:10:36.059 | 1.00th=[ 6849], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8717], 00:10:36.059 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[12125], 60.00th=[12911], 00:10:36.059 | 70.00th=[13435], 80.00th=[15139], 90.00th=[27657], 95.00th=[36963], 00:10:36.059 | 99.00th=[58983], 99.50th=[60031], 99.90th=[60031], 99.95th=[64226], 00:10:36.059 | 99.99th=[71828] 00:10:36.059 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:10:36.059 slat (nsec): min=1589, max=8320.5k, avg=96669.53, stdev=508694.26 00:10:36.059 clat (usec): min=4910, max=62641, avg=12505.72, stdev=7986.13 00:10:36.059 lat (usec): min=4935, max=66226, avg=12602.39, stdev=8043.63 00:10:36.059 clat percentiles (usec): 00:10:36.059 | 1.00th=[ 6259], 5.00th=[ 7308], 10.00th=[ 7635], 20.00th=[ 8029], 00:10:36.059 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[10552], 60.00th=[11731], 00:10:36.059 | 70.00th=[13304], 80.00th=[15139], 90.00th=[17171], 95.00th=[22676], 00:10:36.059 | 99.00th=[56886], 99.50th=[58459], 99.90th=[62653], 99.95th=[62653], 00:10:36.059 | 99.99th=[62653] 00:10:36.059 bw ( KiB/s): min=16384, max=20480, per=21.38%, avg=18432.00, stdev=2896.31, samples=2 00:10:36.059 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:36.059 lat (msec) : 2=0.01%, 10=39.61%, 20=48.41%, 50=10.06%, 100=1.91% 00:10:36.059 cpu : usr=2.19%, sys=4.28%, ctx=525, majf=0, minf=1 00:10:36.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:36.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.059 issued rwts: total=4600,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.059 00:10:36.059 Run status group 0 (all jobs): 00:10:36.059 READ: bw=82.9MiB/s (86.9MB/s), 14.2MiB/s-27.9MiB/s (14.9MB/s-29.2MB/s), io=86.6MiB (90.8MB), run=1003-1045msec 00:10:36.059 WRITE: bw=84.2MiB/s (88.3MB/s), 15.3MiB/s-27.9MiB/s (16.1MB/s-29.3MB/s), io=88.0MiB (92.3MB), run=1003-1045msec 00:10:36.059 00:10:36.059 Disk stats (read/write): 00:10:36.059 nvme0n1: ios=3122/3182, merge=0/0, ticks=18075/17089, in_queue=35164, util=84.77% 00:10:36.059 
nvme0n2: ios=5674/5647, merge=0/0, ticks=51997/42622, in_queue=94619, util=88.47% 00:10:36.059 nvme0n3: ios=5683/6144, merge=0/0, ticks=49700/47254, in_queue=96954, util=95.11% 00:10:36.059 nvme0n4: ios=3641/3750, merge=0/0, ticks=21981/22633, in_queue=44614, util=97.56% 00:10:36.059 05:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:36.059 05:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1404046 00:10:36.059 05:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:36.059 05:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:36.059 [global] 00:10:36.059 thread=1 00:10:36.059 invalidate=1 00:10:36.059 rw=read 00:10:36.059 time_based=1 00:10:36.059 runtime=10 00:10:36.059 ioengine=libaio 00:10:36.059 direct=1 00:10:36.059 bs=4096 00:10:36.059 iodepth=1 00:10:36.059 norandommap=1 00:10:36.059 numjobs=1 00:10:36.059 00:10:36.059 [job0] 00:10:36.059 filename=/dev/nvme0n1 00:10:36.059 [job1] 00:10:36.059 filename=/dev/nvme0n2 00:10:36.059 [job2] 00:10:36.059 filename=/dev/nvme0n3 00:10:36.059 [job3] 00:10:36.059 filename=/dev/nvme0n4 00:10:36.059 Could not set queue depth (nvme0n1) 00:10:36.059 Could not set queue depth (nvme0n2) 00:10:36.059 Could not set queue depth (nvme0n3) 00:10:36.059 Could not set queue depth (nvme0n4) 00:10:36.318 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.319 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.319 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.319 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.319 fio-3.35 00:10:36.319 Starting 4 threads 00:10:38.861 05:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:39.121 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:10:39.121 fio: pid=1404243, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.121 05:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:39.381 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=16297984, buflen=4096 00:10:39.381 fio: pid=1404242, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.381 05:03:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.382 05:03:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:39.382 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7626752, buflen=4096 00:10:39.382 fio: pid=1404240, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.382 05:03:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.382 05:03:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:39.641 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=425984, buflen=4096 00:10:39.641 fio: pid=1404241, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.641 05:03:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.641 05:03:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:39.641 00:10:39.641 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1404240: Mon Dec 9 05:03:53 2024 00:10:39.641 read: IOPS=626, BW=2503KiB/s (2563kB/s)(7448KiB/2976msec) 00:10:39.641 slat (usec): min=6, max=34296, avg=65.82, stdev=1015.89 00:10:39.641 clat (usec): min=436, max=42503, avg=1513.76, stdev=4208.61 00:10:39.641 lat (usec): min=463, max=42530, avg=1579.60, stdev=4325.86 00:10:39.641 clat percentiles (usec): 00:10:39.641 | 1.00th=[ 725], 5.00th=[ 889], 10.00th=[ 947], 20.00th=[ 996], 00:10:39.641 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:10:39.641 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:10:39.641 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:10:39.641 | 99.99th=[42730] 00:10:39.641 bw ( KiB/s): min= 176, max= 3720, per=32.59%, avg=2435.20, stdev=1690.45, samples=5 00:10:39.641 iops : min= 44, max= 930, avg=608.80, stdev=422.61, samples=5 00:10:39.641 lat (usec) : 500=0.11%, 750=1.07%, 1000=19.97% 00:10:39.641 lat (msec) : 2=77.67%, 50=1.13% 00:10:39.641 cpu : usr=1.24%, sys=2.42%, ctx=1868, majf=0, minf=1 00:10:39.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.641 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.641 issued rwts: total=1863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.641 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1404241: Mon Dec 9 05:03:53 2024 00:10:39.641 read: IOPS=32, BW=129KiB/s (132kB/s)(416KiB/3216msec) 00:10:39.641 slat (usec): min=8, max=732, avg=37.54, stdev=74.12 00:10:39.641 clat (usec): min=536, max=64796, avg=30653.90, stdev=18555.93 00:10:39.641 lat (usec): min=566, max=64824, avg=30691.54, stdev=18562.00 00:10:39.641 clat percentiles (usec): 00:10:39.641 | 1.00th=[ 545], 5.00th=[ 930], 10.00th=[ 979], 20.00th=[ 1074], 00:10:39.641 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:39.641 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:39.642 | 99.00th=[44303], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:10:39.642 | 99.99th=[64750] 00:10:39.642 bw ( KiB/s): min= 88, max= 312, per=1.77%, avg=132.00, stdev=88.33, samples=6 00:10:39.642 iops : min= 22, max= 78, avg=33.00, stdev=22.08, samples=6 00:10:39.642 lat (usec) : 750=2.86%, 1000=9.52% 00:10:39.642 lat (msec) : 2=14.29%, 10=0.95%, 50=70.48%, 100=0.95% 00:10:39.642 cpu : usr=0.16%, sys=0.00%, ctx=110, majf=0, minf=2 00:10:39.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:39.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.642 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.642 issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.642 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1404242: Mon Dec 9 05:03:53 2024 00:10:39.642 read: IOPS=1423, BW=5694KiB/s (5831kB/s)(15.5MiB/2795msec) 00:10:39.642 slat (usec): min=6, max=6895, avg=26.14, stdev=137.64 00:10:39.642 clat (usec): min=165, max=42006, avg=664.15, stdev=1216.49 00:10:39.642 lat (usec): min=173, max=42033, avg=690.29, stdev=1225.69 00:10:39.642 clat percentiles (usec): 00:10:39.642 | 1.00th=[ 314], 5.00th=[ 396], 10.00th=[ 465], 20.00th=[ 519], 00:10:39.642 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 627], 00:10:39.642 | 70.00th=[ 644], 80.00th=[ 668], 90.00th=[ 873], 95.00th=[ 1020], 00:10:39.642 | 99.00th=[ 1172], 99.50th=[ 1254], 99.90th=[28705], 99.95th=[41157], 00:10:39.642 | 99.99th=[42206] 00:10:39.642 bw ( KiB/s): min= 3096, max= 6792, per=79.20%, avg=5918.40, stdev=1581.97, samples=5 00:10:39.642 iops : min= 774, max= 1698, avg=1479.60, stdev=395.49, samples=5 00:10:39.642 lat (usec) : 250=0.28%, 500=15.53%, 750=70.98%, 1000=6.98% 00:10:39.642 lat (msec) : 2=6.11%, 50=0.10% 00:10:39.642 cpu : usr=1.68%, sys=3.87%, ctx=3983, majf=0, minf=2 00:10:39.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.642 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.642 issued rwts: total=3980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.642 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1404243: Mon Dec 9 05:03:53 2024 00:10:39.642 read: IOPS=24, BW=96.4KiB/s (98.8kB/s)(252KiB/2613msec) 00:10:39.642 slat (nsec): min=10324, max=38891, avg=26025.50, stdev=2568.75 00:10:39.642 clat (usec): min=796, max=42075, avg=41089.70, stdev=5175.73 00:10:39.642 lat (usec): min=834, max=42099, avg=41115.73, stdev=5174.08 00:10:39.642 clat percentiles (usec): 00:10:39.642 | 1.00th=[ 799], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:39.642 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:39.642 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:39.642 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:39.642 | 99.99th=[42206] 00:10:39.642 bw ( KiB/s): min= 96, max= 96, per=1.28%, avg=96.00, stdev= 0.00, samples=5 00:10:39.642 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:39.642 lat (usec) : 1000=1.56% 00:10:39.642 lat (msec) : 50=96.88% 00:10:39.642 cpu : usr=0.11%, sys=0.00%, ctx=64, majf=0, minf=2 00:10:39.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.642 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.642 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.642 00:10:39.642 Run status group 0 (all jobs): 00:10:39.642 READ: bw=7473KiB/s (7652kB/s), 
96.4KiB/s-5694KiB/s (98.8kB/s-5831kB/s), io=23.5MiB (24.6MB), run=2613-3216msec 00:10:39.642 00:10:39.642 Disk stats (read/write): 00:10:39.642 nvme0n1: ios=1759/0, merge=0/0, ticks=2563/0, in_queue=2563, util=92.65% 00:10:39.642 nvme0n2: ios=137/0, merge=0/0, ticks=4046/0, in_queue=4046, util=99.69% 00:10:39.642 nvme0n3: ios=3789/0, merge=0/0, ticks=2386/0, in_queue=2386, util=96.07% 00:10:39.642 nvme0n4: ios=63/0, merge=0/0, ticks=2591/0, in_queue=2591, util=96.43% 00:10:39.901 05:03:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.901 05:03:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:40.161 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.161 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:40.422 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.422 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:40.682 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.682 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:40.682 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:40.682 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1404046 00:10:40.682 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:40.682 05:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.251 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.251 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:41.251 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:41.251 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.251 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:41.251 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.251 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:41.251 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:41.251 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:41.251 nvmf hotplug test: fio failed as expected 00:10:41.251 
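The hotplug pass above reduces to a short, repeatable sequence: start a background fio read workload against the connected namespaces, delete the backing bdevs over RPC while I/O is in flight, and treat fio's err=95 ("Operation not supported") exits as the expected outcome. A minimal sketch of that flow, assuming an SPDK target is already serving nqn.2016-06.io.spdk:cnode1 with the same bdev names as this run; SPDK_DIR is a placeholder for the checkout path:

    # 10s read workload against the connected nvme namespaces, backgrounded
    "$SPDK_DIR"/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!

    # pull the backing devices out from under fio, mirroring the deletes above
    "$SPDK_DIR"/scripts/rpc.py bdev_raid_delete concat0
    "$SPDK_DIR"/scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$SPDK_DIR"/scripts/rpc.py bdev_malloc_delete "$m"
    done

    # fio exits non-zero once every job has hit 'Operation not supported'
    if wait "$fio_pid"; then
        echo 'unexpected: fio survived bdev removal'
    else
        echo 'nvmf hotplug test: fio failed as expected'
    fi

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1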
05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.510 rmmod nvme_tcp 00:10:41.510 rmmod nvme_fabrics 00:10:41.510 rmmod nvme_keyring 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1400330 ']' 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1400330 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1400330 ']' 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1400330 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.510 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1400330 00:10:41.770 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.770 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.770 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1400330' 00:10:41.770 killing process with pid 1400330 00:10:41.770 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1400330 00:10:41.770 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1400330 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.339 05:03:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.339 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.262 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.262 00:10:44.262 real 0m30.766s 00:10:44.262 user 2m34.252s 00:10:44.262 sys 0m10.076s 00:10:44.262 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.263 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.263 ************************************ 00:10:44.263 END TEST nvmf_fio_target 00:10:44.263 ************************************ 00:10:44.263 05:03:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:44.263 05:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.263 05:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.263 05:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.524 ************************************ 00:10:44.524 START TEST nvmf_bdevio 00:10:44.524 ************************************ 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:44.524 * Looking for test storage... 
00:10:44.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.524 --rc genhtml_branch_coverage=1 00:10:44.524 --rc genhtml_function_coverage=1 00:10:44.524 --rc genhtml_legend=1 00:10:44.524 --rc geninfo_all_blocks=1 00:10:44.524 --rc geninfo_unexecuted_blocks=1 00:10:44.524 00:10:44.524 ' 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.524 --rc genhtml_branch_coverage=1 00:10:44.524 --rc genhtml_function_coverage=1 00:10:44.524 --rc genhtml_legend=1 00:10:44.524 --rc geninfo_all_blocks=1 00:10:44.524 --rc geninfo_unexecuted_blocks=1 00:10:44.524 00:10:44.524 ' 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.524 --rc genhtml_branch_coverage=1 00:10:44.524 --rc genhtml_function_coverage=1 00:10:44.524 --rc genhtml_legend=1 00:10:44.524 --rc geninfo_all_blocks=1 00:10:44.524 --rc geninfo_unexecuted_blocks=1 00:10:44.524 00:10:44.524 ' 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.524 --rc genhtml_branch_coverage=1 00:10:44.524 --rc genhtml_function_coverage=1 00:10:44.524 --rc genhtml_legend=1 00:10:44.524 --rc geninfo_all_blocks=1 00:10:44.524 --rc geninfo_unexecuted_blocks=1 00:10:44.524 00:10:44.524 ' 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.524 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.525 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.785 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.785 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.785 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.785 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.918 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:52.918 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:52.919 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.919 05:04:05 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:52.919 Found net devices under 0000:31:00.0: cvl_0_0 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:52.919 Found net devices under 0000:31:00.1: cvl_0_1 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.919 
05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:10:52.919 00:10:52.919 --- 10.0.0.2 ping statistics --- 00:10:52.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.919 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
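The trace above is nvmftestinit building the physical-NIC topology for this job: the target-side E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP listener port, and a ping in each direction proves the link. A minimal sketch of the same setup, using the interface, namespace, and address names visible in the trace (all specific to this CI host):

    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the comment tag lets teardown strip only SPDK-owned rules later
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                      # initiator -> target
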
00:10:52.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:10:52.919 00:10:52.919 --- 10.0.0.1 ping statistics --- 00:10:52.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.919 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1409635 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1409635 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1409635 ']' 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.919 05:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.919 [2024-12-09 05:04:06.069468] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
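nvmfappstart above launched the target inside that namespace ('ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x78') and waitforlisten blocks until the RPC socket answers. A hand-run equivalent under the same assumptions (repo path taken from this job; the polling loop is an illustration, not the autotest helper itself; rpc_get_methods is a stock SPDK RPC):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # poll until the app answers on its RPC socket
    until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up"
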
00:10:52.919 [2024-12-09 05:04:06.069600] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.919 [2024-12-09 05:04:06.216145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.919 [2024-12-09 05:04:06.338128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.919 [2024-12-09 05:04:06.338195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.919 [2024-12-09 05:04:06.338208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.919 [2024-12-09 05:04:06.338220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.919 [2024-12-09 05:04:06.338231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.920 [2024-12-09 05:04:06.341067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:52.920 [2024-12-09 05:04:06.341304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:52.920 [2024-12-09 05:04:06.341412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.920 [2024-12-09 05:04:06.341430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:52.920 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.920 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:52.920 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.920 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.920 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.920 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.920 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.920 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.920 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.920 [2024-12-09 05:04:06.884448] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.180 Malloc0 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.180 05:04:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.180 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.180 [2024-12-09 05:04:07.011120] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.180 { 00:10:53.180 "params": { 00:10:53.180 "name": "Nvme$subsystem", 00:10:53.180 "trtype": "$TEST_TRANSPORT", 00:10:53.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.180 "adrfam": "ipv4", 00:10:53.180 "trsvcid": "$NVMF_PORT", 00:10:53.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.180 "hdgst": ${hdgst:-false}, 00:10:53.180 "ddgst": ${ddgst:-false} 00:10:53.180 }, 00:10:53.180 "method": "bdev_nvme_attach_controller" 00:10:53.180 } 00:10:53.180 EOF 00:10:53.180 )") 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:53.180 05:04:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.180 "params": { 00:10:53.180 "name": "Nvme1", 00:10:53.180 "trtype": "tcp", 00:10:53.180 "traddr": "10.0.0.2", 00:10:53.180 "adrfam": "ipv4", 00:10:53.180 "trsvcid": "4420", 00:10:53.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.180 "hdgst": false, 00:10:53.180 "ddgst": false 00:10:53.180 }, 00:10:53.180 "method": "bdev_nvme_attach_controller" 00:10:53.180 }' 00:10:53.180 [2024-12-09 05:04:07.110570] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
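The rpc_cmd calls above provisioned the whole target stack (transport, Malloc bdev, subsystem, namespace, listener), and gen_nvmf_target_json rendered its heredoc into the bdev_nvme_attach_controller config that bdevio, just starting above, reads over /dev/fd/62. The same provisioning issued directly with scripts/rpc.py, arguments copied from the trace (a sketch, not the harness's rpc_cmd wrapper):

    RPC="sudo ./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB IO unit
    $RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MiB of 512 B blocks, RAM-backed
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
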
00:10:53.180 [2024-12-09 05:04:07.110696] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409962 ] 00:10:53.439 [2024-12-09 05:04:07.268434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.439 [2024-12-09 05:04:07.397407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.439 [2024-12-09 05:04:07.397520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.439 [2024-12-09 05:04:07.397544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.021 I/O targets: 00:10:54.021 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:54.021 00:10:54.021 00:10:54.021 CUnit - A unit testing framework for C - Version 2.1-3 00:10:54.021 http://cunit.sourceforge.net/ 00:10:54.021 00:10:54.021 00:10:54.021 Suite: bdevio tests on: Nvme1n1 00:10:54.021 Test: blockdev write read block ...passed 00:10:54.021 Test: blockdev write zeroes read block ...passed 00:10:54.021 Test: blockdev write zeroes read no split ...passed 00:10:54.021 Test: blockdev write zeroes read split ...passed 00:10:54.021 Test: blockdev write zeroes read split partial ...passed 00:10:54.021 Test: blockdev reset ...[2024-12-09 05:04:07.972350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:54.021 [2024-12-09 05:04:07.972532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000394200 (9): Bad file descriptor 00:10:54.021 [2024-12-09 05:04:07.993267] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
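In the blockdev reset test above, the 'Failed to flush tqpair ... (9): Bad file descriptor' ERROR is the expected side effect of yanking the TCP qpair out from under in-flight completions; the notice that follows ('Resetting controller successful') shows bdev_nvme reconnected cleanly. The same path can be exercised by hand against an SPDK app that has the controller attached (a sketch; Nvme1 is the controller name from the JSON config above, and bdev_nvme_reset_controller is the stock RPC for this):

    sudo ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme1
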
00:10:54.021 passed 00:10:54.021 Test: blockdev write read 8 blocks ...passed 00:10:54.021 Test: blockdev write read size > 128k ...passed 00:10:54.021 Test: blockdev write read invalid size ...passed 00:10:54.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:54.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:54.282 Test: blockdev write read max offset ...passed 00:10:54.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:54.282 Test: blockdev writev readv 8 blocks ...passed 00:10:54.282 Test: blockdev writev readv 30 x 1block ...passed 00:10:54.282 Test: blockdev writev readv block ...passed 00:10:54.282 Test: blockdev writev readv size > 128k ...passed 00:10:54.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:54.282 Test: blockdev comparev and writev ...[2024-12-09 05:04:08.220346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.282 [2024-12-09 05:04:08.220406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:54.282 [2024-12-09 05:04:08.220440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.282 [2024-12-09 05:04:08.220454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:54.282 [2024-12-09 05:04:08.221041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.282 [2024-12-09 05:04:08.221063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:54.282 [2024-12-09 05:04:08.221098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.282 [2024-12-09 05:04:08.221110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:54.282 [2024-12-09 05:04:08.221680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.282 [2024-12-09 05:04:08.221702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:54.282 [2024-12-09 05:04:08.221721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.282 [2024-12-09 05:04:08.221735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:54.282 [2024-12-09 05:04:08.222334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.282 [2024-12-09 05:04:08.222364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:54.282 [2024-12-09 05:04:08.222384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.282 [2024-12-09 05:04:08.222398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:54.282 passed 00:10:54.543 Test: blockdev nvme passthru rw ...passed 00:10:54.543 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:04:08.306822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.543 [2024-12-09 05:04:08.306866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:54.543 [2024-12-09 05:04:08.307318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.543 [2024-12-09 05:04:08.307339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:54.543 [2024-12-09 05:04:08.307740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.543 [2024-12-09 05:04:08.307758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:54.543 [2024-12-09 05:04:08.308167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.543 [2024-12-09 05:04:08.308188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:54.543 passed 00:10:54.543 Test: blockdev nvme admin passthru ...passed 00:10:54.543 Test: blockdev copy ...passed 00:10:54.543 00:10:54.543 Run Summary: Type Total Ran Passed Failed Inactive 00:10:54.543 suites 1 1 n/a 0 0 00:10:54.543 tests 23 23 23 0 0 00:10:54.543 asserts 152 152 152 0 n/a 00:10:54.543 00:10:54.543 Elapsed time = 1.271 seconds 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.115 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.115 rmmod nvme_tcp 00:10:55.115 rmmod nvme_fabrics 00:10:55.115 rmmod nvme_keyring 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
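The COMPARE FAILURE (02/85) / ABORTED - FAILED FUSED (00/09) pairs in the comparev-and-writev output above are the intended negative path for NVMe fused COMPARE+WRITE: the compare half miscompares, so the controller aborts the fused write half without touching media, and bdevio checks for exactly that status pair. The summary (23/23 tests, 152/152 asserts) confirms the run. A single miscompare can be provoked with nvme-cli against any namespace (an illustrative sketch; the device path is hypothetical and the flags are assumed from nvme-cli's read/write option set):

    printf 'A%.0s' {1..512} > good.bin
    printf 'B%.0s' {1..512} > bad.bin
    sudo nvme write   /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=512 --data=good.bin
    sudo nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=512 --data=bad.bin
    # expect the compare to fail with NVMe status 0x85 (COMPARE FAILURE)
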
00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1409635 ']' 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1409635 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1409635 ']' 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1409635 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1409635 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1409635' 00:10:55.376 killing process with pid 1409635 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1409635 00:10:55.376 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1409635 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.947 05:04:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.493 05:04:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:58.493 00:10:58.493 real 0m13.651s 00:10:58.493 user 0m19.519s 00:10:58.493 sys 0m6.510s 00:10:58.493 05:04:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.493 05:04:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.493 ************************************ 00:10:58.493 END TEST nvmf_bdevio 00:10:58.493 ************************************ 00:10:58.493 05:04:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:58.493 00:10:58.493 real 5m18.583s 00:10:58.493 user 12m19.489s 00:10:58.493 sys 1m54.441s 
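nvmftestfini above is the mirror image of the setup: kill the target by its saved pid, strip only the firewall rules tagged SPDK_NVMF (which is why the ACCEPT rule carried that comment), flush the initiator address, and remove the namespace. Condensed, with the namespace deletion written out as an assumption about what _remove_spdk_ns does:

    sudo kill 1409635 && wait 1409635       # wait only works in the shell that launched it
    # drop SPDK-tagged rules, keep the rest of the ruleset intact
    sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore
    sudo ip -4 addr flush cvl_0_1
    sudo ip netns del cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
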
00:10:58.493 05:04:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.493 05:04:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:58.493 ************************************ 00:10:58.493 END TEST nvmf_target_core 00:10:58.493 ************************************ 00:10:58.493 05:04:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:58.493 05:04:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:58.493 05:04:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.493 05:04:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.493 ************************************ 00:10:58.493 START TEST nvmf_target_extra 00:10:58.493 ************************************ 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:58.493 * Looking for test storage... 00:10:58.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:58.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.493 --rc genhtml_branch_coverage=1 00:10:58.493 --rc genhtml_function_coverage=1 00:10:58.493 --rc genhtml_legend=1 00:10:58.493 --rc geninfo_all_blocks=1 00:10:58.493 --rc geninfo_unexecuted_blocks=1 00:10:58.493 00:10:58.493 ' 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:58.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.493 --rc genhtml_branch_coverage=1 00:10:58.493 --rc genhtml_function_coverage=1 00:10:58.493 --rc genhtml_legend=1 00:10:58.493 --rc geninfo_all_blocks=1 00:10:58.493 --rc geninfo_unexecuted_blocks=1 00:10:58.493 00:10:58.493 ' 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:58.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.493 --rc genhtml_branch_coverage=1 00:10:58.493 --rc genhtml_function_coverage=1 00:10:58.493 --rc genhtml_legend=1 00:10:58.493 --rc geninfo_all_blocks=1 00:10:58.493 --rc geninfo_unexecuted_blocks=1 00:10:58.493 00:10:58.493 ' 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:58.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.493 --rc genhtml_branch_coverage=1 00:10:58.493 --rc genhtml_function_coverage=1 00:10:58.493 --rc genhtml_legend=1 00:10:58.493 --rc geninfo_all_blocks=1 00:10:58.493 --rc geninfo_unexecuted_blocks=1 00:10:58.493 00:10:58.493 ' 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
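The 'lt 1.15 2' trace above is scripts/common.sh picking lcov flags: cmp_versions splits both version strings on '.', '-' and ':' and walks the fields numerically until one side wins. Restated as a self-contained function consistent with the traced steps (a paraphrase, not the script verbatim):

    version_lt() {  # version_lt 1.15 2  -> exit 0 when $1 < $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # missing fields count as 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1  # equal counts as not-less-than
    }
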
00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.493 05:04:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:58.494 ************************************ 00:10:58.494 START TEST nvmf_example 00:10:58.494 ************************************ 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:58.494 * Looking for test storage... 
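The '[: : integer expression expected' message from nvmf/common.sh line 33, which recurs each time common.sh is sourced in this run, is a genuine if harmless shell bug faithfully captured by the log: build_nvmf_app_args executes '[' '' -eq 1 ']' when the variable it tests is empty, and test(1) cannot compare an empty string numerically. A defensive rewrite of that pattern (a suggestion, not what the script currently does; FLAG stands in for whichever variable line 33 actually tests):

    # substitute 0 for an unset or empty flag before the numeric test
    if [ "${FLAG:-0}" -eq 1 ]; then
        : # branch taken only when the flag is explicitly 1
    fi
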
00:10:58.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:58.494 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:58.755 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.756 --rc genhtml_branch_coverage=1 00:10:58.756 --rc genhtml_function_coverage=1 00:10:58.756 --rc genhtml_legend=1 00:10:58.756 --rc geninfo_all_blocks=1 00:10:58.756 --rc geninfo_unexecuted_blocks=1 00:10:58.756 00:10:58.756 ' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.756 --rc genhtml_branch_coverage=1 00:10:58.756 --rc genhtml_function_coverage=1 00:10:58.756 --rc genhtml_legend=1 00:10:58.756 --rc geninfo_all_blocks=1 00:10:58.756 --rc geninfo_unexecuted_blocks=1 00:10:58.756 00:10:58.756 ' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.756 --rc genhtml_branch_coverage=1 00:10:58.756 --rc genhtml_function_coverage=1 00:10:58.756 --rc genhtml_legend=1 00:10:58.756 --rc geninfo_all_blocks=1 00:10:58.756 --rc geninfo_unexecuted_blocks=1 00:10:58.756 00:10:58.756 ' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.756 --rc genhtml_branch_coverage=1 00:10:58.756 --rc genhtml_function_coverage=1 00:10:58.756 --rc genhtml_legend=1 00:10:58.756 --rc geninfo_all_blocks=1 00:10:58.756 --rc geninfo_unexecuted_blocks=1 00:10:58.756 00:10:58.756 ' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:58.756 05:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:58.756 05:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:58.756 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:06.917 05:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.917 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:06.918 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:06.918 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:06.918 Found net devices under 0000:31:00.0: cvl_0_0 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:06.918 Found net devices under 0000:31:00.1: cvl_0_1 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.918 05:04:19 
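
The discovery loop above resolves each candidate PCI function to its kernel net devices through a sysfs glob and keeps only the interface names. The same lookup in isolation (the BDF 0000:31:00.0 is simply the first port this run found):

#!/usr/bin/env bash
# Sketch of the sysfs lookup used by the device-discovery loop above: each
# netdev bound to a PCI function shows up as a directory entry under
# /sys/bus/pci/devices/<bdf>/net/.
shopt -s nullglob                                   # unmatched glob -> empty array
pci=0000:31:00.0                                    # example BDF from this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
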
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.918 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:06.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:06.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.755 ms 00:11:06.918 00:11:06.918 --- 10.0.0.2 ping statistics --- 00:11:06.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.918 rtt min/avg/max/mdev = 0.755/0.755/0.755/0.000 ms 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:11:06.918 00:11:06.918 --- 10.0.0.1 ping statistics --- 00:11:06.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.918 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1414763 00:11:06.918 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:06.919 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:06.919 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1414763 00:11:06.919 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1414763 ']' 00:11:06.919 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.919 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.919 05:04:20 
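
nvmf_tcp_init, traced above, turns the two E810 ports found earlier into a self-contained loopback rig: the target port is moved into a private network namespace and addressed as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, the firewall rule is tagged with an SPDK_NVMF comment so teardown can strip exactly what the test added, and a ping in each direction proves the path. A condensed sketch of the same sequence, using this run's interface names:

#!/usr/bin/env bash
# Condensed replay of the nvmf_tcp_init steps traced above.
tgt=cvl_0_0 ini=cvl_0_1 ns=cvl_0_0_ns_spdk

ip netns add "$ns"
ip link set "$tgt" netns "$ns"                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$ini"             # initiator side, root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
ip link set "$ini" up
ip netns exec "$ns" ip link set "$tgt" up
ip netns exec "$ns" ip link set lo up

# Tagging the rule lets cleanup run: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow nvmf-tcp test traffic'

ping -c 1 10.0.0.2                             # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1         # target -> initiator
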
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.919 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.919 05:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.180 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.180 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:07.180 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:07.180 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.180 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:07.441 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:19.717 Initializing NVMe Controllers 00:11:19.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:19.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:19.717 Initialization complete. Launching workers. 00:11:19.717 ======================================================== 00:11:19.717 Latency(us) 00:11:19.717 Device Information : IOPS MiB/s Average min max 00:11:19.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16883.11 65.95 3790.38 699.31 16584.61 00:11:19.717 ======================================================== 00:11:19.717 Total : 16883.11 65.95 3790.38 699.31 16584.61 00:11:19.717 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.717 rmmod nvme_tcp 00:11:19.717 rmmod nvme_fabrics 00:11:19.717 rmmod nvme_keyring 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1414763 ']' 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1414763 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1414763 ']' 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1414763 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1414763 00:11:19.717 05:04:31 
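
The target that produced the latency table above was provisioned with a handful of RPCs (transport, ram-disk bdev, subsystem, namespace, listener), and the initiator side was driven by spdk_nvme_perf against the namespaced listener. Replayed outside the harness with SPDK's stock rpc.py client, with flags copied from the trace, the sequence looks roughly like this:

#!/usr/bin/env bash
# The provisioning sequence traced above, replayed with SPDK's rpc.py client.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192               # flags copied from the trace
$rpc bdev_malloc_create 64 512                             # 64 MiB ram disk, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 64-deep, 4 KiB, random mixed (30% reads) workload for 10 s, as in the log:
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
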
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1414763' 00:11:19.717 killing process with pid 1414763 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1414763 00:11:19.717 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1414763 00:11:19.717 nvmf threads initialize successfully 00:11:19.717 bdev subsystem init successfully 00:11:19.717 created a nvmf target service 00:11:19.717 create targets's poll groups done 00:11:19.717 all subsystems of target started 00:11:19.717 nvmf target is running 00:11:19.717 all subsystems of target stopped 00:11:19.717 destroy targets's poll groups done 00:11:19.717 destroyed the nvmf target service 00:11:19.717 bdev subsystem finish successfully 00:11:19.717 nvmf threads destroy successfully 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.717 05:04:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.659 00:11:20.659 real 0m22.215s 00:11:20.659 user 0m48.366s 00:11:20.659 sys 0m7.332s 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.659 ************************************ 00:11:20.659 END TEST nvmf_example 00:11:20.659 ************************************ 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.659 ************************************ 00:11:20.659 START TEST nvmf_filesystem 00:11:20.659 ************************************ 00:11:20.659 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:20.930 * Looking for test storage... 00:11:20.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.930 --rc genhtml_branch_coverage=1 00:11:20.930 --rc genhtml_function_coverage=1 00:11:20.930 --rc genhtml_legend=1 00:11:20.930 --rc geninfo_all_blocks=1 00:11:20.930 --rc geninfo_unexecuted_blocks=1 00:11:20.930 00:11:20.930 ' 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.930 --rc genhtml_branch_coverage=1 00:11:20.930 --rc genhtml_function_coverage=1 00:11:20.930 --rc genhtml_legend=1 00:11:20.930 --rc geninfo_all_blocks=1 00:11:20.930 --rc geninfo_unexecuted_blocks=1 00:11:20.930 00:11:20.930 ' 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.930 --rc genhtml_branch_coverage=1 00:11:20.930 --rc genhtml_function_coverage=1 00:11:20.930 --rc genhtml_legend=1 00:11:20.930 --rc geninfo_all_blocks=1 00:11:20.930 --rc geninfo_unexecuted_blocks=1 00:11:20.930 00:11:20.930 ' 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.930 --rc genhtml_branch_coverage=1 00:11:20.930 --rc genhtml_function_coverage=1 00:11:20.930 --rc genhtml_legend=1 00:11:20.930 --rc geninfo_all_blocks=1 00:11:20.930 --rc geninfo_unexecuted_blocks=1 00:11:20.930 00:11:20.930 ' 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:20.930 05:04:34 
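
The lcov probe above funnels into scripts/common.sh's cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field, treating a missing field as 0. A self-contained re-implementation of that walk (numeric fields assumed; the original additionally validates each field with its decimal helper):

#!/usr/bin/env bash
# Re-implementation of the cmp_versions walk traced above: split on .-:,
# then compare numerically field by field; missing fields count as 0.
version_lt() {    # usage: version_lt 1.15 2  -> returns 0 iff $1 < $2
    local -a ver1 ver2
    local v len
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1    # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2, so the lcov pretty-print options are enabled"
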
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:20.930 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:20.931 
05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:20.931 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:20.932 #define SPDK_CONFIG_H 00:11:20.932 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:20.932 #define SPDK_CONFIG_APPS 1 00:11:20.932 #define SPDK_CONFIG_ARCH native 00:11:20.932 #define SPDK_CONFIG_ASAN 1 00:11:20.932 #undef SPDK_CONFIG_AVAHI 00:11:20.932 #undef SPDK_CONFIG_CET 00:11:20.932 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:20.932 #define SPDK_CONFIG_COVERAGE 1 00:11:20.932 #define SPDK_CONFIG_CROSS_PREFIX 00:11:20.932 #undef SPDK_CONFIG_CRYPTO 00:11:20.932 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:20.932 #undef SPDK_CONFIG_CUSTOMOCF 00:11:20.932 #undef SPDK_CONFIG_DAOS 00:11:20.932 #define SPDK_CONFIG_DAOS_DIR 00:11:20.932 #define SPDK_CONFIG_DEBUG 1 00:11:20.932 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:20.932 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:20.932 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:20.932 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:20.932 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:20.932 #undef SPDK_CONFIG_DPDK_UADK 00:11:20.932 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:20.932 #define SPDK_CONFIG_EXAMPLES 1 00:11:20.932 #undef SPDK_CONFIG_FC 00:11:20.932 #define SPDK_CONFIG_FC_PATH 00:11:20.932 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:20.932 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:20.932 #define SPDK_CONFIG_FSDEV 1 00:11:20.932 #undef SPDK_CONFIG_FUSE 00:11:20.932 #undef SPDK_CONFIG_FUZZER 00:11:20.932 #define SPDK_CONFIG_FUZZER_LIB 00:11:20.932 #undef SPDK_CONFIG_GOLANG 00:11:20.932 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:20.932 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:20.932 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:20.932 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:20.932 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:20.932 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:20.932 #undef SPDK_CONFIG_HAVE_LZ4 00:11:20.932 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:20.932 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:20.932 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:20.932 #define SPDK_CONFIG_IDXD 1 00:11:20.932 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:20.932 #undef SPDK_CONFIG_IPSEC_MB 00:11:20.932 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:20.932 #define SPDK_CONFIG_ISAL 1 00:11:20.932 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:20.932 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:20.932 #define SPDK_CONFIG_LIBDIR 00:11:20.932 #undef SPDK_CONFIG_LTO 00:11:20.932 #define SPDK_CONFIG_MAX_LCORES 128 00:11:20.932 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:20.932 #define SPDK_CONFIG_NVME_CUSE 1 00:11:20.932 #undef SPDK_CONFIG_OCF 00:11:20.932 #define SPDK_CONFIG_OCF_PATH 00:11:20.932 #define SPDK_CONFIG_OPENSSL_PATH 00:11:20.932 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:20.932 #define SPDK_CONFIG_PGO_DIR 00:11:20.932 #undef SPDK_CONFIG_PGO_USE 00:11:20.932 #define SPDK_CONFIG_PREFIX /usr/local 00:11:20.932 #undef SPDK_CONFIG_RAID5F 00:11:20.932 #undef SPDK_CONFIG_RBD 00:11:20.932 #define SPDK_CONFIG_RDMA 1 00:11:20.932 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:20.932 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:20.932 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:20.932 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:20.932 #define SPDK_CONFIG_SHARED 1 00:11:20.932 #undef SPDK_CONFIG_SMA 00:11:20.932 #define SPDK_CONFIG_TESTS 1 00:11:20.932 #undef SPDK_CONFIG_TSAN 
00:11:20.932 #define SPDK_CONFIG_UBLK 1 00:11:20.932 #define SPDK_CONFIG_UBSAN 1 00:11:20.932 #undef SPDK_CONFIG_UNIT_TESTS 00:11:20.932 #undef SPDK_CONFIG_URING 00:11:20.932 #define SPDK_CONFIG_URING_PATH 00:11:20.932 #undef SPDK_CONFIG_URING_ZNS 00:11:20.932 #undef SPDK_CONFIG_USDT 00:11:20.932 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:20.932 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:20.932 #undef SPDK_CONFIG_VFIO_USER 00:11:20.932 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:20.932 #define SPDK_CONFIG_VHOST 1 00:11:20.932 #define SPDK_CONFIG_VIRTIO 1 00:11:20.932 #undef SPDK_CONFIG_VTUNE 00:11:20.932 #define SPDK_CONFIG_VTUNE_DIR 00:11:20.932 #define SPDK_CONFIG_WERROR 1 00:11:20.932 #define SPDK_CONFIG_WPDK_DIR 00:11:20.932 #undef SPDK_CONFIG_XNVME 00:11:20.932 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.932 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:20.933 05:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
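
A few records back, pm/common decided which resource monitors this node runs: cpu-load and vmstat everywhere, cpu-temp and BMC power only on bare metal, which is what the Linux, not-QEMU and no-/.dockerenv guards in the trace establish. A sketch of that gating; the dotted string in the trace is the output of a hardware probe the log does not show, so the DMI read below is an assumption:

#!/usr/bin/env bash
# Sketch of the monitor-selection gating from pm/common above.
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)     # always collected
# Assumed probe: the trace compares an unshown string against "QEMU";
# a DMI vendor read is one plausible source for it.
vendor=$(cat /sys/class/dmi/id/sys_vendor 2>/dev/null || echo unknown)
if [[ $(uname -s) == Linux && $vendor != QEMU && ! -e /.dockerenv ]]; then
    # bare metal: temperature and BMC power draw are meaningful
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
fi
printf 'monitor: %s\n' "${MONITOR_RESOURCES[@]}"
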
00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:20.933 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:21.238 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:21.238 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:21.238 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:21.238 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:21.238 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:21.238 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:21.238 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:21.238 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:21.238 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:21.239 05:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.239 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
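Two script idioms dominate the trace above. First, the long run of "-- # : 0" / "-- # export SPDK_TEST_*" pairs is bash's default-assignment idiom: ":" is a no-op that still performs its expansions, so ': "${VAR:=default}"' assigns the default only when the flag is unset or empty (values such as tcp and e810 were pre-seeded by autorun-spdk.conf, which is why the trace shows them already resolved), and the export that follows publishes the flag to child test scripts. Second, the ASAN_OPTIONS/UBSAN_OPTIONS exports are colon-separated key=value strings that the sanitizer runtimes parse at process start; abort_on_error=1 turns any report into a hard abort the harness can detect. The visibly duplicated LD_LIBRARY_PATH/PYTHONPATH segments are harmless accretion from re-sourcing the same export scripts; lookups simply hit the first copy. A minimal sketch of both idioms, with illustrative defaults (the traced values are the effective ones, not necessarily the script's built-in defaults):

    # Default-then-export idiom for one test flag; ':' discards its
    # arguments, and ':=' assigns only when the variable is unset or empty.
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME

    # Sanitizer runtimes read these colon-separated option strings on startup;
    # values copied from the trace above.
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'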
00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
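The suppression-file steps traced above (rm -rf, cat, echo leak:libfuse3.so, LSAN_OPTIONS) configure LeakSanitizer: each line of the file is a "leak:<pattern>" rule, and any leak whose report matches the pattern is silenced at process exit, here presumably covering a known leak inside libfuse3. Reduced to its effect:

    # One suppression rule per line; LSan matches the pattern against
    # the module/function names in each leak report at exit.
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file

PYTHONDONTWRITEBYTECODE=1 in the same block keeps CPython from littering the shared workspace with .pyc files between runs.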
00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1417634 ]] 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1417634 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
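set_test_storage, whose body is traced next, decides where the test may write scratch data: it builds storage_candidates from the test directory, a mktemp -u fallback under /tmp, and that fallback's root, mkdir -p's them, then parses df output until a mount offers the requested space (the 2147483648-byte argument plus a 64 MiB margin, visible below as requested_size=2214592512). A minimal sketch of the selection logic only, assuming GNU df's --output flags rather than the script's actual df -T/read loop:

    requested_size=2214592512    # 2 GiB + 64 MiB margin, as traced below
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        # Free bytes on the filesystem backing this candidate directory.
        target_space=$(df --output=avail -B1 "$target_dir" 2>/dev/null | tail -1)
        (( target_space >= requested_size )) && break
    done
    printf '* Found test storage at %s\n' "$target_dir"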
00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.bAhKka 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.bAhKka/tests/target /tmp/spdk.bAhKka 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:21.240 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:21.241 05:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122391064576 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356533760 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6965469184 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666898432 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23371776 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=387072 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=116736 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:21.241 05:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677949440 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678268928 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=319488 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:21.241 * Looking for test storage... 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122391064576 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9180061696 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.241 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.241 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:21.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.242 --rc genhtml_branch_coverage=1 00:11:21.242 --rc genhtml_function_coverage=1 00:11:21.242 --rc genhtml_legend=1 00:11:21.242 --rc geninfo_all_blocks=1 00:11:21.242 --rc geninfo_unexecuted_blocks=1 00:11:21.242 00:11:21.242 ' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:21.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.242 --rc genhtml_branch_coverage=1 00:11:21.242 --rc genhtml_function_coverage=1 00:11:21.242 --rc genhtml_legend=1 00:11:21.242 --rc geninfo_all_blocks=1 00:11:21.242 --rc geninfo_unexecuted_blocks=1 00:11:21.242 00:11:21.242 ' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:21.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.242 --rc genhtml_branch_coverage=1 00:11:21.242 --rc genhtml_function_coverage=1 00:11:21.242 --rc genhtml_legend=1 00:11:21.242 --rc geninfo_all_blocks=1 00:11:21.242 --rc geninfo_unexecuted_blocks=1 00:11:21.242 00:11:21.242 ' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:21.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.242 --rc genhtml_branch_coverage=1 00:11:21.242 --rc genhtml_function_coverage=1 00:11:21.242 --rc genhtml_legend=1 00:11:21.242 --rc geninfo_all_blocks=1 00:11:21.242 --rc geninfo_unexecuted_blocks=1 00:11:21.242 00:11:21.242 ' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.242 05:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.242 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:29.674 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:29.674 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.674 05:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:29.674 Found net devices under 0000:31:00.0: cvl_0_0 00:11:29.674 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:29.675 Found net devices under 0000:31:00.1: cvl_0_1 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:29.675 05:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:11:29.675 00:11:29.675 --- 10.0.0.2 ping statistics --- 00:11:29.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.675 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:11:29.675 00:11:29.675 --- 10.0.0.1 ping statistics --- 00:11:29.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.675 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.675 ************************************ 00:11:29.675 START TEST nvmf_filesystem_no_in_capsule 00:11:29.675 ************************************ 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1421560 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1421560 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1421560 ']' 00:11:29.675 
05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.675 05:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.675 [2024-12-09 05:04:42.903506] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:29.675 [2024-12-09 05:04:42.903633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.675 [2024-12-09 05:04:43.065761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.675 [2024-12-09 05:04:43.199528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.675 [2024-12-09 05:04:43.199593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.675 [2024-12-09 05:04:43.199607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.675 [2024-12-09 05:04:43.199620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.675 [2024-12-09 05:04:43.199630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
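The bring-up traced above reduces to a few commands once the network namespace exists: start nvmf_tgt inside the target namespace, wait for its JSON-RPC socket, then drive it with rpc.py. A minimal sketch, assuming a standard SPDK checkout; the rpc.py path and the use of rpc_get_methods as a readiness probe are illustrative assumptions, not taken from this log:

  # launch the target in the namespace set up earlier; -m 0xF pins 4 reactor cores
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # block until the app answers on /var/tmp/spdk.sock (what waitforlisten polls for)
  sudo ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null

  # same configuration the first test case issues next (-c 0: no in-capsule data)
  sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  sudo ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1

The four reactor notices that follow in the log correspond to the 0xF core mask passed above.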
00:11:29.675 [2024-12-09 05:04:43.202779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.675 [2024-12-09 05:04:43.202929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.675 [2024-12-09 05:04:43.203002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.675 [2024-12-09 05:04:43.203021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.962 [2024-12-09 05:04:43.747732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.962 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.315 Malloc1 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.315 05:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.315 [2024-12-09 05:04:44.241647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:30.315 { 00:11:30.315 "name": "Malloc1", 00:11:30.315 "aliases": [ 00:11:30.315 "f3a5543e-3a04-4de9-99ed-aa3600576d62" 00:11:30.315 ], 00:11:30.315 "product_name": "Malloc disk", 00:11:30.315 "block_size": 512, 00:11:30.315 "num_blocks": 1048576, 00:11:30.315 "uuid": "f3a5543e-3a04-4de9-99ed-aa3600576d62", 00:11:30.315 "assigned_rate_limits": { 00:11:30.315 "rw_ios_per_sec": 0, 00:11:30.315 "rw_mbytes_per_sec": 0, 00:11:30.315 "r_mbytes_per_sec": 0, 00:11:30.315 "w_mbytes_per_sec": 0 00:11:30.315 }, 00:11:30.315 "claimed": true, 00:11:30.315 "claim_type": "exclusive_write", 00:11:30.315 "zoned": false, 00:11:30.315 "supported_io_types": { 00:11:30.315 "read": 
true, 00:11:30.315 "write": true, 00:11:30.315 "unmap": true, 00:11:30.315 "flush": true, 00:11:30.315 "reset": true, 00:11:30.315 "nvme_admin": false, 00:11:30.315 "nvme_io": false, 00:11:30.315 "nvme_io_md": false, 00:11:30.315 "write_zeroes": true, 00:11:30.315 "zcopy": true, 00:11:30.315 "get_zone_info": false, 00:11:30.315 "zone_management": false, 00:11:30.315 "zone_append": false, 00:11:30.315 "compare": false, 00:11:30.315 "compare_and_write": false, 00:11:30.315 "abort": true, 00:11:30.315 "seek_hole": false, 00:11:30.315 "seek_data": false, 00:11:30.315 "copy": true, 00:11:30.315 "nvme_iov_md": false 00:11:30.315 }, 00:11:30.315 "memory_domains": [ 00:11:30.315 { 00:11:30.315 "dma_device_id": "system", 00:11:30.315 "dma_device_type": 1 00:11:30.315 }, 00:11:30.315 { 00:11:30.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.315 "dma_device_type": 2 00:11:30.315 } 00:11:30.315 ], 00:11:30.315 "driver_specific": {} 00:11:30.315 } 00:11:30.315 ]' 00:11:30.315 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:30.577 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:30.577 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:30.577 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:30.577 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:30.577 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:30.577 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:30.577 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.961 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.961 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:31.961 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.961 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:31.961 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:34.503 05:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:34.503 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:34.503 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.887 ************************************ 00:11:35.887 START TEST filesystem_ext4 00:11:35.887 ************************************ 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
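The three filesystem_<fs> subtests that follow (ext4, btrfs, xfs) each run the same create/mount/write/teardown cycle against the partition just carved out of the exported malloc bdev. A condensed sketch of that cycle, using the device name nvme0n1p1 from this run; the for-loop packaging is illustrative, while the per-filesystem force flags match the make_filesystem helper traced below:

  for fs in ext4 btrfs xfs; do
    force=-F; [ "$fs" = ext4 ] || force=-f   # mkfs.ext4 takes -F, the others -f
    mkfs.$fs $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync            # prove the mount accepts writes
    rm /mnt/device/aaa && sync
    umount /mnt/device
  done

Each subtest then confirms, via the kill -0 and lsblk checks visible below, that the target process is still alive and that nvme0n1/nvme0n1p1 are still present after the unmount.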
00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:35.887 mke2fs 1.47.0 (5-Feb-2023) 00:11:35.887 Discarding device blocks: 0/522240 done 00:11:35.887 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:35.887 Filesystem UUID: 8e3042cc-d4ee-49f6-bb11-7d769c9ba609 00:11:35.887 Superblock backups stored on blocks: 00:11:35.887 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:35.887 00:11:35.887 Allocating group tables: 0/64 done 00:11:35.887 Writing inode tables: 0/64 done 00:11:35.887 Creating journal (8192 blocks): done 00:11:35.887 Writing superblocks and filesystem accounting information: 0/64 done 00:11:35.887 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:35.887 05:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:42.465 
05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1421560 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:42.465 00:11:42.465 real 0m6.294s 00:11:42.465 user 0m0.023s 00:11:42.465 sys 0m0.084s 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:42.465 ************************************ 00:11:42.465 END TEST filesystem_ext4 00:11:42.465 ************************************ 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.465 ************************************ 00:11:42.465 START TEST filesystem_btrfs 00:11:42.465 ************************************ 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:42.465 05:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:42.465 05:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:42.465 btrfs-progs v6.8.1 00:11:42.465 See https://btrfs.readthedocs.io for more information. 00:11:42.465 00:11:42.465 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:42.465 NOTE: several default settings have changed in version 5.15, please make sure 00:11:42.465 this does not affect your deployments: 00:11:42.465 - DUP for metadata (-m dup) 00:11:42.465 - enabled no-holes (-O no-holes) 00:11:42.465 - enabled free-space-tree (-R free-space-tree) 00:11:42.465 00:11:42.465 Label: (null) 00:11:42.465 UUID: ed87a177-959f-459d-9020-f65228bcc4e0 00:11:42.465 Node size: 16384 00:11:42.465 Sector size: 4096 (CPU page size: 4096) 00:11:42.465 Filesystem size: 510.00MiB 00:11:42.465 Block group profiles: 00:11:42.465 Data: single 8.00MiB 00:11:42.465 Metadata: DUP 32.00MiB 00:11:42.465 System: DUP 8.00MiB 00:11:42.465 SSD detected: yes 00:11:42.465 Zoned device: no 00:11:42.465 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:42.465 Checksum: crc32c 00:11:42.465 Number of devices: 1 00:11:42.465 Devices: 00:11:42.465 ID SIZE PATH 00:11:42.465 1 510.00MiB /dev/nvme0n1p1 00:11:42.465 00:11:42.465 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:42.465 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1421560 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.036 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.036 
05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.036 00:11:43.036 real 0m0.938s 00:11:43.036 user 0m0.024s 00:11:43.036 sys 0m0.122s 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:43.037 ************************************ 00:11:43.037 END TEST filesystem_btrfs 00:11:43.037 ************************************ 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.037 ************************************ 00:11:43.037 START TEST filesystem_xfs 00:11:43.037 ************************************ 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:43.037 05:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:43.037 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:43.037 = sectsz=512 attr=2, projid32bit=1 00:11:43.037 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:43.037 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:43.037 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:43.037 = sunit=0 swidth=0 blks 00:11:43.037 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:43.037 log =internal log bsize=4096 blocks=16384, version=2 00:11:43.037 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:43.037 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:44.421 Discarding blocks...Done. 00:11:44.421 05:04:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:44.421 05:04:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1421560 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.965 00:11:46.965 real 0m3.731s 00:11:46.965 user 0m0.023s 00:11:46.965 sys 0m0.084s 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:46.965 ************************************ 00:11:46.965 END TEST filesystem_xfs 00:11:46.965 ************************************ 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.965 05:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1421560 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1421560 ']' 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1421560 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1421560 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1421560' 00:11:46.965 killing process with pid 1421560 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1421560 00:11:46.965 05:05:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1421560 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:48.347 00:11:48.347 real 0m19.404s 00:11:48.347 user 1m15.215s 00:11:48.347 sys 0m1.658s 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.347 ************************************ 00:11:48.347 END TEST nvmf_filesystem_no_in_capsule 00:11:48.347 ************************************ 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.347 ************************************ 00:11:48.347 START TEST nvmf_filesystem_in_capsule 00:11:48.347 ************************************ 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1425484 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1425484 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1425484 ']' 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
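The second TEST case repeats the entire sequence with exactly one knob changed: the TCP transport is created with a 4096-byte in-capsule data size instead of 0, so small write payloads can ride inside the NVMe-oF command capsule rather than requiring a separate data transfer. In rpc.py terms the only delta between the two runs is the -c argument (a sketch; flag spellings per SPDK's rpc.py, where -c is --in-capsule-data-size and -u is --io-unit-size):

  # nvmf_filesystem_no_in_capsule (first case, above)
  sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # nvmf_filesystem_in_capsule (this case)
  sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096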
00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.347 05:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.607 [2024-12-09 05:05:02.377396] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:48.607 [2024-12-09 05:05:02.377494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.607 [2024-12-09 05:05:02.496507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.607 [2024-12-09 05:05:02.573204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.607 [2024-12-09 05:05:02.573245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.607 [2024-12-09 05:05:02.573254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.607 [2024-12-09 05:05:02.573262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.607 [2024-12-09 05:05:02.573269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.607 [2024-12-09 05:05:02.575112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.607 [2024-12-09 05:05:02.575232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.607 [2024-12-09 05:05:02.575322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.607 [2024-12-09 05:05:02.575348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.179 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.179 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:49.179 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.179 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.179 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.439 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.439 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:49.439 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:49.439 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.439 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.439 [2024-12-09 05:05:03.192902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.439 05:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.439 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:49.439 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.439 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 Malloc1 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 [2024-12-09 05:05:03.546682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:49.700 05:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:49.700 { 00:11:49.700 "name": "Malloc1", 00:11:49.700 "aliases": [ 00:11:49.700 "a773fc79-2b4a-4eb6-a227-9094d068237c" 00:11:49.700 ], 00:11:49.700 "product_name": "Malloc disk", 00:11:49.700 "block_size": 512, 00:11:49.700 "num_blocks": 1048576, 00:11:49.700 "uuid": "a773fc79-2b4a-4eb6-a227-9094d068237c", 00:11:49.700 "assigned_rate_limits": { 00:11:49.700 "rw_ios_per_sec": 0, 00:11:49.700 "rw_mbytes_per_sec": 0, 00:11:49.700 "r_mbytes_per_sec": 0, 00:11:49.700 "w_mbytes_per_sec": 0 00:11:49.700 }, 00:11:49.700 "claimed": true, 00:11:49.700 "claim_type": "exclusive_write", 00:11:49.700 "zoned": false, 00:11:49.700 "supported_io_types": { 00:11:49.700 "read": true, 00:11:49.700 "write": true, 00:11:49.700 "unmap": true, 00:11:49.700 "flush": true, 00:11:49.700 "reset": true, 00:11:49.700 "nvme_admin": false, 00:11:49.700 "nvme_io": false, 00:11:49.700 "nvme_io_md": false, 00:11:49.700 "write_zeroes": true, 00:11:49.700 "zcopy": true, 00:11:49.700 "get_zone_info": false, 00:11:49.700 "zone_management": false, 00:11:49.700 "zone_append": false, 00:11:49.700 "compare": false, 00:11:49.700 "compare_and_write": false, 00:11:49.700 "abort": true, 00:11:49.700 "seek_hole": false, 00:11:49.700 "seek_data": false, 00:11:49.700 "copy": true, 00:11:49.700 "nvme_iov_md": false 00:11:49.700 }, 00:11:49.700 "memory_domains": [ 00:11:49.700 { 00:11:49.700 "dma_device_id": "system", 00:11:49.700 "dma_device_type": 1 00:11:49.700 }, 00:11:49.700 { 00:11:49.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.700 "dma_device_type": 2 00:11:49.700 } 00:11:49.700 ], 00:11:49.700 "driver_specific": {} 00:11:49.700 } 00:11:49.700 ]' 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:49.700 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:49.701 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:49.701 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:49.701 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:49.701 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:49.701 05:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.614 05:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.614 05:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:51.614 05:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.614 05:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:51.614 05:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:53.529 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:53.529 05:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:53.791 05:05:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.744 ************************************ 00:11:54.744 START TEST filesystem_in_capsule_ext4 00:11:54.744 ************************************ 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:54.744 05:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:54.744 mke2fs 1.47.0 (5-Feb-2023) 00:11:54.744 Discarding device blocks: 0/522240 done 00:11:54.744 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:54.744 Filesystem UUID: ac2fed85-e18a-4ab3-be4e-9c8bd98b15cc 00:11:54.744 Superblock backups stored on blocks: 00:11:54.744 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:54.744 00:11:54.744 Allocating group tables: 0/64 done 00:11:54.744 Writing inode tables: 
0/64 done 00:11:55.003 Creating journal (8192 blocks): done 00:11:57.327 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:11:57.327 00:11:57.327 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:57.327 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.613 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1425484 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.874 00:12:02.874 real 0m8.086s 00:12:02.874 user 0m0.039s 00:12:02.874 sys 0m0.070s 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.874 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:02.874 ************************************ 00:12:02.874 END TEST filesystem_in_capsule_ext4 00:12:02.874 ************************************ 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.875 
************************************ 00:12:02.875 START TEST filesystem_in_capsule_btrfs 00:12:02.875 ************************************ 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:02.875 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:03.137 btrfs-progs v6.8.1 00:12:03.137 See https://btrfs.readthedocs.io for more information. 00:12:03.137 00:12:03.137 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:03.137 NOTE: several default settings have changed in version 5.15, please make sure 00:12:03.137 this does not affect your deployments: 00:12:03.137 - DUP for metadata (-m dup) 00:12:03.137 - enabled no-holes (-O no-holes) 00:12:03.137 - enabled free-space-tree (-R free-space-tree) 00:12:03.137 00:12:03.137 Label: (null) 00:12:03.137 UUID: 24ffa9c8-fece-4acd-9a13-a9f5902a938e 00:12:03.137 Node size: 16384 00:12:03.137 Sector size: 4096 (CPU page size: 4096) 00:12:03.137 Filesystem size: 510.00MiB 00:12:03.137 Block group profiles: 00:12:03.137 Data: single 8.00MiB 00:12:03.137 Metadata: DUP 32.00MiB 00:12:03.137 System: DUP 8.00MiB 00:12:03.137 SSD detected: yes 00:12:03.137 Zoned device: no 00:12:03.137 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:03.137 Checksum: crc32c 00:12:03.137 Number of devices: 1 00:12:03.137 Devices: 00:12:03.137 ID SIZE PATH 00:12:03.137 1 510.00MiB /dev/nvme0n1p1 00:12:03.137 00:12:03.137 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.137 05:05:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1425484 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.398 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.658 00:12:03.658 real 0m0.620s 00:12:03.658 user 0m0.035s 00:12:03.658 sys 0m0.112s 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:03.658 ************************************ 00:12:03.658 END TEST filesystem_in_capsule_btrfs 00:12:03.658 ************************************ 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.658 ************************************ 00:12:03.658 START TEST filesystem_in_capsule_xfs 00:12:03.658 ************************************ 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.658 05:05:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:03.658 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:03.658 = sectsz=512 attr=2, projid32bit=1 00:12:03.658 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:03.658 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:03.658 data = bsize=4096 blocks=130560, imaxpct=25 00:12:03.658 = sunit=0 swidth=0 blks 00:12:03.658 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:03.658 log =internal log bsize=4096 blocks=16384, version=2 00:12:03.658 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:03.658 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:04.596 Discarding blocks...Done. 
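For orientation, the per-filesystem check traced above for ext4 and btrfs (and repeated below for xfs) reduces to the following sketch, reconstructed from the xtrace; variable names mirror the trace, and the retry and error-handling paths of the real target/filesystem.sh and make_filesystem helper are omitted:

    # Minimal sketch of the traced test body (not the verbatim script)
    nvmf_filesystem_create() {
        local fstype=$1 nvme_name=$2
        local dev_name=/dev/${nvme_name}p1 force
        # ext4 forces with -F; btrfs and xfs use -f (per the traced branch)
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs.$fstype $force "$dev_name"
        mount "$dev_name" /mnt/device       # mount the exported namespace
        touch /mnt/device/aaa && sync       # write a file through the FS
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$nvmfpid"                  # target process must still be alive
        lsblk -l -o NAME | grep -q -w "$nvme_name"       # device still present
        lsblk -l -o NAME | grep -q -w "${nvme_name}p1"   # partition still present
    }

The kill -0 and lsblk lines are the actual assertions: after the mkfs/mount/unmount cycle, the NVMe-oF target process must have survived the in-capsule I/O and the namespace must still be visible to the initiator.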
00:12:04.596 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:04.596 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1425484 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.505 00:12:06.505 real 0m3.013s 00:12:06.505 user 0m0.019s 00:12:06.505 sys 0m0.086s 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.505 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.505 ************************************ 00:12:06.505 END TEST filesystem_in_capsule_xfs 00:12:06.505 ************************************ 00:12:06.765 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:07.025 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:07.025 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.284 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.284 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:07.284 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:07.284 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1425484 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1425484 ']' 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1425484 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1425484 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1425484' 00:12:07.285 killing process with pid 1425484 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1425484 00:12:07.285 05:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1425484 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:08.666 00:12:08.666 real 0m20.146s 00:12:08.666 user 1m18.606s 00:12:08.666 sys 0m1.531s 00:12:08.666 05:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.666 ************************************ 00:12:08.666 END TEST nvmf_filesystem_in_capsule 00:12:08.666 ************************************ 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:08.666 rmmod nvme_tcp 00:12:08.666 rmmod nvme_fabrics 00:12:08.666 rmmod nvme_keyring 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.666 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.213 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:11.213 00:12:11.213 real 0m49.993s 00:12:11.213 user 2m36.262s 00:12:11.213 sys 0m9.147s 00:12:11.213 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.213 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:11.213 
************************************ 00:12:11.213 END TEST nvmf_filesystem 00:12:11.213 ************************************ 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.214 ************************************ 00:12:11.214 START TEST nvmf_target_discovery 00:12:11.214 ************************************ 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:11.214 * Looking for test storage... 00:12:11.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:11.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.214 --rc genhtml_branch_coverage=1 00:12:11.214 --rc genhtml_function_coverage=1 00:12:11.214 --rc genhtml_legend=1 00:12:11.214 --rc geninfo_all_blocks=1 00:12:11.214 --rc geninfo_unexecuted_blocks=1 00:12:11.214 00:12:11.214 ' 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:11.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.214 --rc genhtml_branch_coverage=1 00:12:11.214 --rc genhtml_function_coverage=1 00:12:11.214 --rc genhtml_legend=1 00:12:11.214 --rc geninfo_all_blocks=1 00:12:11.214 --rc geninfo_unexecuted_blocks=1 00:12:11.214 00:12:11.214 ' 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:11.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.214 --rc genhtml_branch_coverage=1 00:12:11.214 --rc genhtml_function_coverage=1 00:12:11.214 --rc genhtml_legend=1 00:12:11.214 --rc geninfo_all_blocks=1 00:12:11.214 --rc geninfo_unexecuted_blocks=1 00:12:11.214 00:12:11.214 ' 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:11.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.214 --rc genhtml_branch_coverage=1 00:12:11.214 --rc genhtml_function_coverage=1 00:12:11.214 --rc genhtml_legend=1 00:12:11.214 --rc geninfo_all_blocks=1 00:12:11.214 --rc geninfo_unexecuted_blocks=1 00:12:11.214 00:12:11.214 ' 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:11.214 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:11.215 05:05:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.360 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.360 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:19.360 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:19.360 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:19.361 05:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:19.361 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:19.361 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:19.361 Found net devices under 0000:31:00.0: cvl_0_0 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
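The discovery loop traced here amounts to resolving each whitelisted E810 PCI function to its kernel net device through sysfs; a condensed sketch reconstructed from the trace, with the link-state and RDMA branches omitted:

    for pci in "${pci_devs[@]}"; do
        # every netdev bound to this PCI function appears under its sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With two E810 ports found, the first (cvl_0_0) becomes the target interface and the second (cvl_0_1) the initiator interface, as the trace shows next.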
00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:19.361 Found net devices under 0000:31:00.1: cvl_0_1 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.361 05:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:19.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:12:19.361 00:12:19.361 --- 10.0.0.2 ping statistics --- 00:12:19.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.361 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:12:19.361 00:12:19.361 --- 10.0.0.1 ping statistics --- 00:12:19.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.361 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1433752 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1433752 00:12:19.361 05:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1433752 ']' 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.361 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.361 [2024-12-09 05:05:32.633449] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:19.361 [2024-12-09 05:05:32.633583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.361 [2024-12-09 05:05:32.800955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.361 [2024-12-09 05:05:32.929662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.361 [2024-12-09 05:05:32.929728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.361 [2024-12-09 05:05:32.929742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.361 [2024-12-09 05:05:32.929755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.361 [2024-12-09 05:05:32.929765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
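At this point nvmf_tcp_init has built the test topology: the target port cvl_0_0 is moved into a private network namespace with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP/4420, both directions are ping-verified, and nvmf_tgt is launched inside that namespace. Condensed from the commands logged above (the harness's ipts wrapper additionally tags its iptables rule with an SPDK_NVMF comment so teardown can strip it later):

    # Two-port NVMe/TCP topology: target in a netns, initiator in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF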
00:12:19.361 [2024-12-09 05:05:32.932762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.361 [2024-12-09 05:05:32.932940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.361 [2024-12-09 05:05:32.933011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.361 [2024-12-09 05:05:32.933033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 [2024-12-09 05:05:33.470649] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 Null1 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 05:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 [2024-12-09 05:05:33.546555] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 Null2 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:19.623 Null3 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.623 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.885 Null4 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.885 05:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.885 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:12:20.157 00:12:20.157 Discovery Log Number of Records 6, Generation counter 6 00:12:20.157 =====Discovery Log Entry 0====== 00:12:20.157 trtype: tcp 00:12:20.157 adrfam: ipv4 00:12:20.157 subtype: current discovery subsystem 00:12:20.157 treq: not required 00:12:20.157 portid: 0 00:12:20.157 trsvcid: 4420 00:12:20.157 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:20.157 traddr: 10.0.0.2 00:12:20.157 eflags: explicit discovery connections, duplicate discovery information 00:12:20.157 sectype: none 00:12:20.157 =====Discovery Log Entry 1====== 00:12:20.157 trtype: tcp 00:12:20.157 adrfam: ipv4 00:12:20.157 subtype: nvme subsystem 00:12:20.157 treq: not required 00:12:20.157 portid: 0 00:12:20.157 trsvcid: 4420 00:12:20.157 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:20.157 traddr: 10.0.0.2 00:12:20.157 eflags: none 00:12:20.157 sectype: none 00:12:20.157 =====Discovery Log Entry 2====== 00:12:20.157 trtype: tcp 00:12:20.157 adrfam: ipv4 00:12:20.157 subtype: nvme subsystem 00:12:20.157 treq: not required 00:12:20.157 portid: 0 00:12:20.157 trsvcid: 4420 00:12:20.157 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:20.157 traddr: 10.0.0.2 00:12:20.157 eflags: none 00:12:20.157 sectype: none 00:12:20.157 =====Discovery Log Entry 3====== 00:12:20.157 trtype: tcp 00:12:20.157 adrfam: ipv4 00:12:20.157 subtype: nvme subsystem 00:12:20.157 treq: not required 00:12:20.157 portid: 0 00:12:20.157 trsvcid: 4420 00:12:20.157 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:20.157 traddr: 10.0.0.2 00:12:20.157 eflags: none 00:12:20.157 sectype: none 00:12:20.157 =====Discovery Log Entry 4====== 00:12:20.157 trtype: tcp 00:12:20.157 adrfam: ipv4 00:12:20.157 subtype: nvme subsystem 
00:12:20.157 treq: not required 00:12:20.157 portid: 0 00:12:20.157 trsvcid: 4420 00:12:20.157 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:20.157 traddr: 10.0.0.2 00:12:20.157 eflags: none 00:12:20.157 sectype: none 00:12:20.157 =====Discovery Log Entry 5====== 00:12:20.157 trtype: tcp 00:12:20.157 adrfam: ipv4 00:12:20.157 subtype: discovery subsystem referral 00:12:20.157 treq: not required 00:12:20.157 portid: 0 00:12:20.157 trsvcid: 4430 00:12:20.157 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:20.157 traddr: 10.0.0.2 00:12:20.157 eflags: none 00:12:20.157 sectype: none 00:12:20.157 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:20.157 Perform nvmf subsystem discovery via RPC 00:12:20.157 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:20.157 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.157 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.157 [ 00:12:20.157 { 00:12:20.157 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.157 "subtype": "Discovery", 00:12:20.157 "listen_addresses": [ 00:12:20.157 { 00:12:20.157 "trtype": "TCP", 00:12:20.157 "adrfam": "IPv4", 00:12:20.157 "traddr": "10.0.0.2", 00:12:20.157 "trsvcid": "4420" 00:12:20.157 } 00:12:20.157 ], 00:12:20.157 "allow_any_host": true, 00:12:20.157 "hosts": [] 00:12:20.157 }, 00:12:20.157 { 00:12:20.157 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.157 "subtype": "NVMe", 00:12:20.157 "listen_addresses": [ 00:12:20.157 { 00:12:20.157 "trtype": "TCP", 00:12:20.157 "adrfam": "IPv4", 00:12:20.157 "traddr": "10.0.0.2", 00:12:20.157 "trsvcid": "4420" 00:12:20.157 } 00:12:20.157 ], 00:12:20.157 "allow_any_host": true, 00:12:20.157 "hosts": [], 00:12:20.157 "serial_number": "SPDK00000000000001", 00:12:20.157 "model_number": "SPDK bdev Controller", 00:12:20.157 "max_namespaces": 32, 00:12:20.157 "min_cntlid": 1, 00:12:20.157 "max_cntlid": 65519, 00:12:20.157 "namespaces": [ 00:12:20.157 { 00:12:20.157 "nsid": 1, 00:12:20.157 "bdev_name": "Null1", 00:12:20.157 "name": "Null1", 00:12:20.157 "nguid": "281AB389207B4F1B9BC0544AAD1C45DC", 00:12:20.157 "uuid": "281ab389-207b-4f1b-9bc0-544aad1c45dc" 00:12:20.157 } 00:12:20.157 ] 00:12:20.157 }, 00:12:20.157 { 00:12:20.157 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:20.157 "subtype": "NVMe", 00:12:20.157 "listen_addresses": [ 00:12:20.157 { 00:12:20.157 "trtype": "TCP", 00:12:20.157 "adrfam": "IPv4", 00:12:20.157 "traddr": "10.0.0.2", 00:12:20.157 "trsvcid": "4420" 00:12:20.157 } 00:12:20.157 ], 00:12:20.157 "allow_any_host": true, 00:12:20.157 "hosts": [], 00:12:20.157 "serial_number": "SPDK00000000000002", 00:12:20.157 "model_number": "SPDK bdev Controller", 00:12:20.157 "max_namespaces": 32, 00:12:20.157 "min_cntlid": 1, 00:12:20.157 "max_cntlid": 65519, 00:12:20.157 "namespaces": [ 00:12:20.157 { 00:12:20.157 "nsid": 1, 00:12:20.157 "bdev_name": "Null2", 00:12:20.157 "name": "Null2", 00:12:20.157 "nguid": "BD5B72AE0EB1469E957D2A2DD4B8ED4F", 00:12:20.157 "uuid": "bd5b72ae-0eb1-469e-957d-2a2dd4b8ed4f" 00:12:20.157 } 00:12:20.157 ] 00:12:20.157 }, 00:12:20.157 { 00:12:20.157 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:20.157 "subtype": "NVMe", 00:12:20.157 "listen_addresses": [ 00:12:20.157 { 00:12:20.157 "trtype": "TCP", 00:12:20.157 "adrfam": "IPv4", 00:12:20.157 "traddr": "10.0.0.2", 
00:12:20.157 "trsvcid": "4420" 00:12:20.157 } 00:12:20.157 ], 00:12:20.157 "allow_any_host": true, 00:12:20.157 "hosts": [], 00:12:20.157 "serial_number": "SPDK00000000000003", 00:12:20.157 "model_number": "SPDK bdev Controller", 00:12:20.158 "max_namespaces": 32, 00:12:20.158 "min_cntlid": 1, 00:12:20.158 "max_cntlid": 65519, 00:12:20.158 "namespaces": [ 00:12:20.158 { 00:12:20.158 "nsid": 1, 00:12:20.158 "bdev_name": "Null3", 00:12:20.158 "name": "Null3", 00:12:20.158 "nguid": "A428CBC3AF1E42B896054FB0CC45A9D4", 00:12:20.158 "uuid": "a428cbc3-af1e-42b8-9605-4fb0cc45a9d4" 00:12:20.158 } 00:12:20.158 ] 00:12:20.158 }, 00:12:20.158 { 00:12:20.158 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:20.158 "subtype": "NVMe", 00:12:20.158 "listen_addresses": [ 00:12:20.158 { 00:12:20.158 "trtype": "TCP", 00:12:20.158 "adrfam": "IPv4", 00:12:20.158 "traddr": "10.0.0.2", 00:12:20.158 "trsvcid": "4420" 00:12:20.158 } 00:12:20.158 ], 00:12:20.158 "allow_any_host": true, 00:12:20.158 "hosts": [], 00:12:20.158 "serial_number": "SPDK00000000000004", 00:12:20.158 "model_number": "SPDK bdev Controller", 00:12:20.158 "max_namespaces": 32, 00:12:20.158 "min_cntlid": 1, 00:12:20.158 "max_cntlid": 65519, 00:12:20.158 "namespaces": [ 00:12:20.158 { 00:12:20.158 "nsid": 1, 00:12:20.158 "bdev_name": "Null4", 00:12:20.158 "name": "Null4", 00:12:20.158 "nguid": "739D7B82CEF24678829B67D432B84CEB", 00:12:20.158 "uuid": "739d7b82-cef2-4678-829b-67d432b84ceb" 00:12:20.158 } 00:12:20.158 ] 00:12:20.158 } 00:12:20.158 ] 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:20.158 05:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.158 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.158 rmmod nvme_tcp 00:12:20.418 rmmod nvme_fabrics 00:12:20.418 rmmod nvme_keyring 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1433752 ']' 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1433752 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1433752 ']' 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1433752 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1433752 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1433752' 00:12:20.418 killing process with pid 1433752 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1433752 00:12:20.418 05:05:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1433752 00:12:21.362 05:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.362 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.271 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.272 00:12:23.272 real 0m12.542s 00:12:23.272 user 0m10.437s 00:12:23.272 sys 0m6.275s 00:12:23.272 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.272 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.272 ************************************ 00:12:23.272 END TEST nvmf_target_discovery 00:12:23.272 ************************************ 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.532 ************************************ 00:12:23.532 START TEST nvmf_referrals 00:12:23.532 ************************************ 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:23.532 * Looking for test storage... 
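The discovery test that just finished is a compact RPC recipe: for each of cnode1..cnode4 it creates a null bdev, creates the subsystem, attaches the bdev as namespace 1, and adds a TCP listener on 10.0.0.2:4420; a discovery listener plus a referral to port 4430 then produce the six discovery-log records shown earlier. One iteration, replayed against a running target with SPDK's scripts/rpc.py (a reasonable assumption here, since the rpc_cmd wrapper in the log drives the same RPC client):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_null_create Null1 102400 512     # name, size, block size, as logged
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420               # expect 6 discovery log records

Teardown mirrors it, as the log shows: nvmf_delete_subsystem and bdev_null_delete per node, nvmf_discovery_remove_referral for port 4430, after which bdev_get_bdevs piped through jq -r '.[].name' must come back empty.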
00:12:23.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.532 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:23.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.792 --rc genhtml_branch_coverage=1 00:12:23.792 --rc genhtml_function_coverage=1 00:12:23.792 --rc genhtml_legend=1 00:12:23.792 --rc geninfo_all_blocks=1 00:12:23.792 --rc geninfo_unexecuted_blocks=1 00:12:23.792 00:12:23.792 ' 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:23.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.792 --rc genhtml_branch_coverage=1 00:12:23.792 --rc genhtml_function_coverage=1 00:12:23.792 --rc genhtml_legend=1 00:12:23.792 --rc geninfo_all_blocks=1 00:12:23.792 --rc geninfo_unexecuted_blocks=1 00:12:23.792 00:12:23.792 ' 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:23.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.792 --rc genhtml_branch_coverage=1 00:12:23.792 --rc genhtml_function_coverage=1 00:12:23.792 --rc genhtml_legend=1 00:12:23.792 --rc geninfo_all_blocks=1 00:12:23.792 --rc geninfo_unexecuted_blocks=1 00:12:23.792 00:12:23.792 ' 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:23.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.792 --rc genhtml_branch_coverage=1 00:12:23.792 --rc genhtml_function_coverage=1 00:12:23.792 --rc genhtml_legend=1 00:12:23.792 --rc geninfo_all_blocks=1 00:12:23.792 --rc geninfo_unexecuted_blocks=1 00:12:23.792 00:12:23.792 ' 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.792 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.793 05:05:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:31.935 05:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:31.935 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:31.935 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:31.935 
05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:31.935 Found net devices under 0000:31:00.0: cvl_0_0 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:31.935 Found net devices under 0000:31:00.1: cvl_0_1 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:31.935 05:05:44 
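Each surviving PCI function is then resolved to its kernel interface through sysfs, keeping only links whose operstate is up; cvl_0_0 and cvl_0_1 are simply the names the ice driver gave these two ports. The same walk in isolation, as a sketch:

  net_devs=()
  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # glob, as traced
    for path in "${pci_net_devs[@]}"; do
      [[ -e $path ]] || continue                        # glob may not match
      [[ $(<"$path/operstate") == up ]] && net_devs+=("${path##*/}")
    done
    echo "Found net devices under $pci: ${net_devs[*]}"
  done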
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.935 05:05:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
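nvmf_tcp_init, traced above, builds a two-interface topology: the target port moves into a fresh network namespace with 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, and an iptables rule opens the NVMe/TCP port before a ping in each direction proves connectivity. Reduced to its essentials (run as root; interface names are this host's):

  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"         # target side lives in the netns
  ip addr add 10.0.0.1/24 dev "$INI_IF"     # initiator stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # Tag the rule so cleanup can strip it later via iptables-save | grep -v.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow 4420'
  ping -c 1 10.0.0.2                        # root ns -> netns
  ip netns exec "$NS" ping -c 1 10.0.0.1    # netns -> root ns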
00:12:31.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:12:31.935 00:12:31.935 --- 10.0.0.2 ping statistics --- 00:12:31.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.935 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:12:31.935 00:12:31.935 --- 10.0.0.1 ping statistics --- 00:12:31.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.935 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:31.935 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1438487 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1438487 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1438487 ']' 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.936 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.936 [2024-12-09 05:05:45.314165] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:31.936 [2024-12-09 05:05:45.314291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.936 [2024-12-09 05:05:45.482635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.936 [2024-12-09 05:05:45.614506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.936 [2024-12-09 05:05:45.614575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.936 [2024-12-09 05:05:45.614589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.936 [2024-12-09 05:05:45.614603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.936 [2024-12-09 05:05:45.614613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.936 [2024-12-09 05:05:45.617563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.936 [2024-12-09 05:05:45.617702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.936 [2024-12-09 05:05:45.617716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.936 [2024-12-09 05:05:45.617718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.197 [2024-12-09 05:05:46.156617] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.197 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
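nvmfappstart then launches nvmf_tgt inside that namespace and parks in waitforlisten until the EAL/reactor startup above completes and the RPC socket answers. Roughly, as a sketch (using rpc_get_methods as the liveness probe is my assumption, though the retry bound matches the traced max_retries=100):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died"; exit 1; }
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.5
  done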
00:12:32.197 [2024-12-09 05:05:46.190643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
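With the target up, referrals.sh drives everything over JSON-RPC: create the TCP transport, open a discovery listener on 10.0.0.2:8009, register three referrals, and assert the count. Condensed into a sketch using the same RPC names and arguments as the trace:

  rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  [[ $(rpc nvmf_discovery_get_referrals | jq length) -eq 3 ]]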
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.458 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:32.720 05:05:46 
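The complementary check, get_referral_ips nvme, reads the discovery log page from the host side so the RPC view and the wire view can be compared; the jq filter drops the record describing the discovery service itself. In isolation (a sketch; the traced --hostnqn/--hostid flags are omitted here):

  expected='127.0.0.2 127.0.0.3 127.0.0.4'
  got=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
    sort | xargs)
  [[ $got == "$expected" ]] || echo "referral mismatch: got '$got'"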
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.720 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:32.981 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.982 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.243 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:33.243 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:33.243 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:33.243 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:33.243 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:33.243 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.243 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.505 05:05:47 
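The -n flag on nvmf_discovery_add_referral decides how a referral is advertised: -n discovery yields a "discovery subsystem referral" record, while a subsystem NQN such as nqn.2016-06.io.spdk:cnode1 yields an "nvme subsystem" record. get_discovery_entries just filters the log page by that subtype; a sketch of the helper:

  get_discovery_entries() {
    local subtype=$1
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq --arg st "$subtype" '.records[] | select(.subtype == $st)'
  }
  get_discovery_entries 'nvme subsystem' | jq -r .subnqn
  # expected on this run: nqn.2016-06.io.spdk:cnode1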
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:33.505 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.766 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:34.026 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:34.026 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:34.026 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:34.026 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:34.026 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.026 05:05:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.286 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.547 rmmod nvme_tcp 00:12:34.547 rmmod nvme_fabrics 00:12:34.547 rmmod nvme_keyring 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1438487 ']' 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1438487 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1438487 ']' 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1438487 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1438487 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1438487' 00:12:34.547 killing process with pid 1438487 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1438487 00:12:34.547 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1438487 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.490 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.490 05:05:49 
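nvmftestfini unwinds in reverse: unload the nvme modules, kill the target after confirming the pid still names one of our reactors rather than, say, sudo, then strip the tagged iptables rule and the namespace. The killprocess and iptr patterns traced above, roughly:

  killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0           # already gone
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                       # reap our own child
  }
  # iptr: rewrite the ruleset without any line tagged SPDK_NVMF
  iptables-save | grep -v SPDK_NVMF | iptables-restore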
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:38.035 00:12:38.035 real 0m14.081s 00:12:38.035 user 0m17.129s 00:12:38.035 sys 0m6.772s 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.035 ************************************ 00:12:38.035 END TEST nvmf_referrals 00:12:38.035 ************************************ 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:38.035 ************************************ 00:12:38.035 START TEST nvmf_connect_disconnect 00:12:38.035 ************************************ 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:38.035 * Looking for test storage... 00:12:38.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:38.035 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.036 05:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:38.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.036 --rc genhtml_branch_coverage=1 00:12:38.036 --rc genhtml_function_coverage=1 00:12:38.036 --rc genhtml_legend=1 00:12:38.036 --rc geninfo_all_blocks=1 00:12:38.036 --rc geninfo_unexecuted_blocks=1 00:12:38.036 00:12:38.036 ' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:38.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.036 --rc genhtml_branch_coverage=1 00:12:38.036 --rc genhtml_function_coverage=1 00:12:38.036 --rc genhtml_legend=1 00:12:38.036 --rc geninfo_all_blocks=1 00:12:38.036 --rc geninfo_unexecuted_blocks=1 00:12:38.036 00:12:38.036 ' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:38.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.036 --rc genhtml_branch_coverage=1 00:12:38.036 --rc genhtml_function_coverage=1 00:12:38.036 --rc genhtml_legend=1 00:12:38.036 --rc geninfo_all_blocks=1 00:12:38.036 --rc geninfo_unexecuted_blocks=1 00:12:38.036 00:12:38.036 ' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:38.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.036 --rc genhtml_branch_coverage=1 00:12:38.036 --rc genhtml_function_coverage=1 00:12:38.036 --rc genhtml_legend=1 00:12:38.036 --rc geninfo_all_blocks=1 00:12:38.036 --rc geninfo_unexecuted_blocks=1 00:12:38.036 00:12:38.036 ' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
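The long scripts/common.sh trace opening this second test is cmp_versions deciding whether the installed lcov predates 2.x: split both version strings on '.', '-' or ':' and compare field by field, padding missing fields with 0. The whole dance fits in a few lines (a sketch assuming purely numeric components; the traced helper also validates each field with a decimal regex):

  lt() {  # lt 1.15 2 -> success when $1 < $2
    local -a a b; local i n
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                   # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov predates 2.x'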
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.036 05:05:51 
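Incidentally, the enormous PATH above is paths/export.sh re-prepending its toolchain directories every time another script sources it; harmless, but an idempotent prepend would avoid the growth. A sketch of that variant:

  prepend_path() {
    case ":$PATH:" in
      *":$1:"*) ;;                 # already present; keep PATH stable
      *) PATH="$1:$PATH" ;;
    esac
  }
  for d in /opt/protoc/21.7/bin /opt/go/1.21.1/bin /opt/golangci/1.54.2/bin; do
    prepend_path "$d"
  done
  export PATH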
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.036 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:38.037 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.179 
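The "[: : integer expression expected" complaint just above is build_nvmf_app_args evaluating '[' '' -eq 1 ']' because the flag it tests is unset on this run; test returns false either way, so the script proceeds unharmed. A defensive form defaults the variable first (the flag is SPDK_RUN_NON_ROOT in current SPDK trees, though that name is an assumption for this revision):

  # Empty/unset flag compares as 0 instead of raising the [ error.
  if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then
    NVMF_APP=(sudo -E -u "$SUDO_USER" "${NVMF_APP[@]}")
  fi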
05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:46.179 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.179 
05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:46.179 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:46.179 Found net devices under 0000:31:00.0: cvl_0_0 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:46.179 Found net devices under 0000:31:00.1: cvl_0_1 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.179 05:05:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.179 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:12:46.180 00:12:46.180 --- 10.0.0.2 ping statistics --- 00:12:46.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.180 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:12:46.180 00:12:46.180 --- 10.0.0.1 ping statistics --- 00:12:46.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.180 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1443611 00:12:46.180 05:05:59 
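Everything between remove_spdk_ns and the ping statistics above implements the single-host trick this test depends on: the target-side port (cvl_0_0) is moved into its own network namespace so traffic between 10.0.0.1 and 10.0.0.2 actually crosses the two E810 ports instead of being short-circuited by the local stack. Condensed from the trace, with names and addresses exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # root ns -> target ns (0.677 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns (0.313 ms above)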
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1443611 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1443611 ']' 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.180 05:05:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.180 [2024-12-09 05:05:59.369349] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:46.180 [2024-12-09 05:05:59.369482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.180 [2024-12-09 05:05:59.535014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.180 [2024-12-09 05:05:59.663229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.180 [2024-12-09 05:05:59.663296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.180 [2024-12-09 05:05:59.663309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.180 [2024-12-09 05:05:59.663323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.180 [2024-12-09 05:05:59.663333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
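waitforlisten above gates the rest of the test on the freshly launched nvmf_tgt accepting RPCs on /var/tmp/spdk.sock. A minimal polling sketch of the same idea (retry budget hypothetical; the real helper also verifies the pid stays alive and issues an actual RPC):

    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        [ -S "$sock" ] && break                          # -S: path exists and is a socket
        sleep 0.1
    done
    [ -S "$sock" ] || { echo 'nvmf_tgt never came up' >&2; exit 1; }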
00:12:46.180 [2024-12-09 05:05:59.666275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.180 [2024-12-09 05:05:59.666412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.180 [2024-12-09 05:05:59.666518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.180 [2024-12-09 05:05:59.666543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.180 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.180 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:46.180 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.180 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.180 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.441 [2024-12-09 05:06:00.212615] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.441 05:06:00 
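The four reactor lines above are the direct consequence of the -m 0xF mask nvmf_tgt was started with: one reactor per set bit, cores 0 through 3. Decoding such a mask takes only shell arithmetic:

    mask=0xF
    for core in {0..31}; do
        (( (mask >> core) & 1 )) && echo "core $core in mask"   # prints cores 0, 1, 2, 3 for 0xF
    done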
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.441 [2024-12-09 05:06:00.341997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:46.441 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:48.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.099 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [iterations condensed: the identical 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' entry repeats every two to three seconds from 00:14:01.645 through 00:15:49.076] 00:15:51.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1
controller(s) 00:15:53.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:40.616 rmmod nvme_tcp 00:16:40.616 rmmod nvme_fabrics 00:16:40.616 rmmod nvme_keyring 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1443611 ']' 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1443611 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1443611 ']' 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1443611 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
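Stitched together, the rpc_cmd traces above amount to a short provisioning block plus the hundred-iteration loop; a condensed sketch (rpc.py path given relative to the SPDK tree; the real connect_disconnect.sh additionally waits for the namespace to appear between connect and disconnect):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                       # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in $(seq 1 100); do                            # num_iterations=100 per the trace
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # emits the 'disconnected 1 controller(s)' lines
    done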
00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1443611 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1443611' 00:16:40.616 killing process with pid 1443611 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1443611 00:16:40.616 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1443611 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.185 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:43.731 00:16:43.731 real 4m5.727s 00:16:43.731 user 15m30.021s 00:16:43.731 sys 0m29.030s 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:43.731 ************************************ 00:16:43.731 END TEST nvmf_connect_disconnect 00:16:43.731 ************************************ 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.731 05:09:57 
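Note how the teardown traced above (iptables-save | grep -v SPDK_NVMF | iptables-restore, just before the END TEST banner) undoes the firewall change without any bookkeeping: every rule the harness inserts is tagged with an SPDK_NVMF comment, so cleanup is a single filter pass. Both halves, copied from this log:

    # setup: tag the ACCEPT rule so it can be found again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown: rewrite the ruleset minus every tagged rule
    iptables-save | grep -v SPDK_NVMF | iptables-restore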
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:43.731 ************************************ 00:16:43.731 START TEST nvmf_multitarget 00:16:43.731 ************************************ 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:43.731 * Looking for test storage... 00:16:43.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:43.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.731 --rc genhtml_branch_coverage=1 00:16:43.731 --rc genhtml_function_coverage=1 00:16:43.731 --rc genhtml_legend=1 00:16:43.731 --rc geninfo_all_blocks=1 00:16:43.731 --rc geninfo_unexecuted_blocks=1 00:16:43.731 00:16:43.731 ' 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:43.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.731 --rc genhtml_branch_coverage=1 00:16:43.731 --rc genhtml_function_coverage=1 00:16:43.731 --rc genhtml_legend=1 00:16:43.731 --rc geninfo_all_blocks=1 00:16:43.731 --rc geninfo_unexecuted_blocks=1 00:16:43.731 00:16:43.731 ' 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:43.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.731 --rc genhtml_branch_coverage=1 00:16:43.731 --rc genhtml_function_coverage=1 00:16:43.731 --rc genhtml_legend=1 00:16:43.731 --rc geninfo_all_blocks=1 00:16:43.731 --rc geninfo_unexecuted_blocks=1 00:16:43.731 00:16:43.731 ' 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:43.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.731 --rc genhtml_branch_coverage=1 00:16:43.731 --rc genhtml_function_coverage=1 00:16:43.731 --rc genhtml_legend=1 00:16:43.731 --rc geninfo_all_blocks=1 00:16:43.731 --rc geninfo_unexecuted_blocks=1 00:16:43.731 00:16:43.731 ' 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.731 05:09:57 
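The scripts/common.sh trace above is nothing more than a dotted-version comparison: lt 1.15 2 asks whether the installed lcov predates 2.x so the matching coverage options can be exported. The core of that comparison, condensed (a sketch, not the verbatim implementation):

    lt() {                              # succeeds when dotted version $1 < $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov 1.x: use the old option set'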
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.731 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:43.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:43.732 05:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:43.732 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:51.877 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.877 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:51.877 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:51.877 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:51.877 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:51.877 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:51.877 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:51.877 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:51.878 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:51.878 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:51.878 Found net devices under 0000:31:00.0: cvl_0_0 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:51.878 Found net devices under 0000:31:00.1: cvl_0_1 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:51.878 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:51.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:16:51.878 00:16:51.878 --- 10.0.0.2 ping statistics --- 00:16:51.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.878 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:16:51.878 00:16:51.878 --- 10.0.0.1 ping statistics --- 00:16:51.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.878 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1495736 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1495736 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1495736 ']' 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.878 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:51.878 [2024-12-09 05:10:05.204740] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
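The nvmf_tcp_init sequence above is what turns one physical machine into a two-node NVMe/TCP testbed: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, while the other (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. A minimal sketch of the same topology, using the interface names discovered in this run:

# Namespace split performed by nvmf_tcp_init (a sketch; interface names as in this run)
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port leaves the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator

The SPDK_NVMF comment on the firewall rule is deliberate: teardown later filters iptables-save output by that marker, so only rules the harness added get removed.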
00:16:51.878 [2024-12-09 05:10:05.204887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.878 [2024-12-09 05:10:05.371000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.878 [2024-12-09 05:10:05.496580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.878 [2024-12-09 05:10:05.496649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.878 [2024-12-09 05:10:05.496665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.878 [2024-12-09 05:10:05.496679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.878 [2024-12-09 05:10:05.496689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.878 [2024-12-09 05:10:05.499735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.878 [2024-12-09 05:10:05.499898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.878 [2024-12-09 05:10:05.499966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.878 [2024-12-09 05:10:05.499991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.139 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.139 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:52.139 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.139 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.139 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:52.139 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.139 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:52.139 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:52.139 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:52.399 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:52.399 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:52.399 "nvmf_tgt_1" 00:16:52.399 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:52.399 "nvmf_tgt_2" 00:16:52.399 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
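multitarget.sh then exercises SPDK's ability to host several independent NVMe-oF targets in one process: it counts targets via nvmf_get_targets, creates nvmf_tgt_1 and nvmf_tgt_2, re-counts, deletes them, and counts once more. Condensed into a sketch (per the helper script's flags, -s caps the number of subsystems per target):

# Condensed multitarget round trip (sketch)
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($RPC_PY nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
$RPC_PY nvmf_create_target -n nvmf_tgt_1 -s 32
$RPC_PY nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC_PY nvmf_get_targets | jq length)" -eq 3 ]    # default + the two just created
$RPC_PY nvmf_delete_target -n nvmf_tgt_1
$RPC_PY nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC_PY nvmf_get_targets | jq length)" -eq 1 ]    # back to the baseline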
00:16:52.399 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:52.660 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:52.660 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:52.660 true 00:16:52.660 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:52.920 true 00:16:52.920 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:52.920 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:52.920 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:52.920 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:52.920 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:52.920 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:52.920 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:52.921 rmmod nvme_tcp 00:16:52.921 rmmod nvme_fabrics 00:16:52.921 rmmod nvme_keyring 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1495736 ']' 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1495736 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1495736 ']' 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1495736 00:16:52.921 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:53.181 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.181 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1495736 00:16:53.181 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.181 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.181 05:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495736' 00:16:53.181 killing process with pid 1495736 00:16:53.181 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1495736 00:16:53.181 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1495736 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.130 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.047 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:56.047 00:16:56.047 real 0m12.653s 00:16:56.047 user 0m11.931s 00:16:56.047 sys 0m6.380s 00:16:56.047 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.047 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:56.047 ************************************ 00:16:56.047 END TEST nvmf_multitarget 00:16:56.047 ************************************ 00:16:56.047 05:10:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:56.047 05:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:56.047 05:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.047 05:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.047 ************************************ 00:16:56.047 START TEST nvmf_rpc 00:16:56.047 ************************************ 00:16:56.047 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:56.309 * Looking for test storage... 
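The teardown above is nvmftestfini undoing the whole setup in reverse: the target process is killed, the initiator-side kernel modules are unloaded, only the SPDK-tagged firewall rules are stripped, and the namespace goes away. Roughly (a sketch; killprocess and _remove_spdk_ns reduced to their visible effect):

# Teardown mirror of the setup (sketch)
kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess; pid 1495736 in this run
modprobe -v -r nvme-tcp                                # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only harness-tagged rules
ip netns delete cvl_0_0_ns_spdk                        # _remove_spdk_ns, reduced to its effect
ip -4 addr flush cvl_0_1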
00:16:56.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:56.309 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:56.309 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:16:56.309 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:56.309 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:56.309 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:56.309 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:56.309 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:56.309 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:16:56.309 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:56.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:56.310 --rc genhtml_branch_coverage=1
00:16:56.310 --rc genhtml_function_coverage=1
00:16:56.310 --rc genhtml_legend=1
00:16:56.310 --rc geninfo_all_blocks=1
00:16:56.310 --rc geninfo_unexecuted_blocks=1
00:16:56.310 
00:16:56.310 '
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:56.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:56.310 --rc genhtml_branch_coverage=1
00:16:56.310 --rc genhtml_function_coverage=1
00:16:56.310 --rc genhtml_legend=1
00:16:56.310 --rc geninfo_all_blocks=1
00:16:56.310 --rc geninfo_unexecuted_blocks=1
00:16:56.310 
00:16:56.310 '
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:16:56.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:56.310 --rc genhtml_branch_coverage=1
00:16:56.310 --rc genhtml_function_coverage=1
00:16:56.310 --rc genhtml_legend=1
00:16:56.310 --rc geninfo_all_blocks=1
00:16:56.310 --rc geninfo_unexecuted_blocks=1
00:16:56.310 
00:16:56.310 '
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:16:56.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:56.310 --rc genhtml_branch_coverage=1
00:16:56.310 --rc genhtml_function_coverage=1
00:16:56.310 --rc genhtml_legend=1
00:16:56.310 --rc geninfo_all_blocks=1
00:16:56.310 --rc geninfo_unexecuted_blocks=1
00:16:56.310 
00:16:56.310 '
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
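The lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared numerically field by field, with each field first validated as a decimal (the ^[0-9]+$ checks). A condensed sketch of that comparison:

# cmp_versions, condensed (sketch; the in-tree helper also validates fields via decimal())
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}      # missing fields compare as 0
        if (( a > b )); then [[ $op == '>' ]]; return; fi
        if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]                              # every field equal
}
cmp_versions 1.15 '<' 2 && echo 'lcov is older than 2.x'   # true for lcov 1.15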
00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.310 05:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.310 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:04.454 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:04.454 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:04.454 Found net devices under 0000:31:00.0: cvl_0_0 00:17:04.454 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:04.455 Found net devices under 0000:31:00.1: cvl_0_1 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:04.455 05:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:04.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:17:04.455 00:17:04.455 --- 10.0.0.2 ping statistics --- 00:17:04.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.455 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:17:04.455 00:17:04.455 --- 10.0.0.1 ping statistics --- 00:17:04.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.455 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1500562 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1500562 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1500562 ']' 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.455 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.455 [2024-12-09 05:10:18.080722] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
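nvmfappstart then launches nvmf_tgt inside the target namespace (pid 1500562 in this run) and waitforlisten blocks until the application's RPC socket at /var/tmp/spdk.sock answers. A sketch of that start-and-wait pattern; polling spdk_get_version through SPDK's stock rpc.py is an assumption here, the in-tree waitforlisten watches the UNIX socket itself:

# Start the target in its namespace and wait for RPC readiness (sketch)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) up, reactors on cores 0-3 (mask 0xF)"

The RPC socket lives in the shared filesystem, so the poll needs no ip netns exec even though the application's network stack is namespaced.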
00:17:04.455 [2024-12-09 05:10:18.080849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.455 [2024-12-09 05:10:18.215335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.455 [2024-12-09 05:10:18.320602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.455 [2024-12-09 05:10:18.320673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.455 [2024-12-09 05:10:18.320684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.455 [2024-12-09 05:10:18.320695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.455 [2024-12-09 05:10:18.320703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.455 [2024-12-09 05:10:18.323208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.455 [2024-12-09 05:10:18.323448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.456 [2024-12-09 05:10:18.323599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.456 [2024-12-09 05:10:18.323616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:05.029 "tick_rate": 2400000000, 00:17:05.029 "poll_groups": [ 00:17:05.029 { 00:17:05.029 "name": "nvmf_tgt_poll_group_000", 00:17:05.029 "admin_qpairs": 0, 00:17:05.029 "io_qpairs": 0, 00:17:05.029 "current_admin_qpairs": 0, 00:17:05.029 "current_io_qpairs": 0, 00:17:05.029 "pending_bdev_io": 0, 00:17:05.029 "completed_nvme_io": 0, 00:17:05.029 "transports": [] 00:17:05.029 }, 00:17:05.029 { 00:17:05.029 "name": "nvmf_tgt_poll_group_001", 00:17:05.029 "admin_qpairs": 0, 00:17:05.029 "io_qpairs": 0, 00:17:05.029 "current_admin_qpairs": 0, 00:17:05.029 "current_io_qpairs": 0, 00:17:05.029 "pending_bdev_io": 0, 00:17:05.029 "completed_nvme_io": 0, 00:17:05.029 "transports": [] 00:17:05.029 }, 00:17:05.029 { 00:17:05.029 "name": "nvmf_tgt_poll_group_002", 00:17:05.029 "admin_qpairs": 0, 00:17:05.029 "io_qpairs": 0, 00:17:05.029 
"current_admin_qpairs": 0, 00:17:05.029 "current_io_qpairs": 0, 00:17:05.029 "pending_bdev_io": 0, 00:17:05.029 "completed_nvme_io": 0, 00:17:05.029 "transports": [] 00:17:05.029 }, 00:17:05.029 { 00:17:05.029 "name": "nvmf_tgt_poll_group_003", 00:17:05.029 "admin_qpairs": 0, 00:17:05.029 "io_qpairs": 0, 00:17:05.029 "current_admin_qpairs": 0, 00:17:05.029 "current_io_qpairs": 0, 00:17:05.029 "pending_bdev_io": 0, 00:17:05.029 "completed_nvme_io": 0, 00:17:05.029 "transports": [] 00:17:05.029 } 00:17:05.029 ] 00:17:05.029 }' 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:05.029 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.292 [2024-12-09 05:10:19.047862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:05.292 "tick_rate": 2400000000, 00:17:05.292 "poll_groups": [ 00:17:05.292 { 00:17:05.292 "name": "nvmf_tgt_poll_group_000", 00:17:05.292 "admin_qpairs": 0, 00:17:05.292 "io_qpairs": 0, 00:17:05.292 "current_admin_qpairs": 0, 00:17:05.292 "current_io_qpairs": 0, 00:17:05.292 "pending_bdev_io": 0, 00:17:05.292 "completed_nvme_io": 0, 00:17:05.292 "transports": [ 00:17:05.292 { 00:17:05.292 "trtype": "TCP" 00:17:05.292 } 00:17:05.292 ] 00:17:05.292 }, 00:17:05.292 { 00:17:05.292 "name": "nvmf_tgt_poll_group_001", 00:17:05.292 "admin_qpairs": 0, 00:17:05.292 "io_qpairs": 0, 00:17:05.292 "current_admin_qpairs": 0, 00:17:05.292 "current_io_qpairs": 0, 00:17:05.292 "pending_bdev_io": 0, 00:17:05.292 "completed_nvme_io": 0, 00:17:05.292 "transports": [ 00:17:05.292 { 00:17:05.292 "trtype": "TCP" 00:17:05.292 } 00:17:05.292 ] 00:17:05.292 }, 00:17:05.292 { 00:17:05.292 "name": "nvmf_tgt_poll_group_002", 00:17:05.292 "admin_qpairs": 0, 00:17:05.292 "io_qpairs": 0, 00:17:05.292 "current_admin_qpairs": 0, 00:17:05.292 "current_io_qpairs": 0, 00:17:05.292 "pending_bdev_io": 0, 00:17:05.292 "completed_nvme_io": 0, 00:17:05.292 "transports": [ 00:17:05.292 { 00:17:05.292 "trtype": "TCP" 
00:17:05.292 } 00:17:05.292 ] 00:17:05.292 }, 00:17:05.292 { 00:17:05.292 "name": "nvmf_tgt_poll_group_003", 00:17:05.292 "admin_qpairs": 0, 00:17:05.292 "io_qpairs": 0, 00:17:05.292 "current_admin_qpairs": 0, 00:17:05.292 "current_io_qpairs": 0, 00:17:05.292 "pending_bdev_io": 0, 00:17:05.292 "completed_nvme_io": 0, 00:17:05.292 "transports": [ 00:17:05.292 { 00:17:05.292 "trtype": "TCP" 00:17:05.292 } 00:17:05.292 ] 00:17:05.292 } 00:17:05.292 ] 00:17:05.292 }' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.292 Malloc1 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.292 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.556 [2024-12-09 05:10:19.308282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420
00:17:05.556 [2024-12-09 05:10:19.346010] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6'
00:17:05.556 Failed to write to /dev/nvme-fabrics: Input/output error
00:17:05.556 could not add new controller: failed to write to nvme-fabrics device
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.556 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:07.474 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:17:07.474 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:07.474 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:07.475 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:07.475 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:09.398 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:09.398 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:09.398 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:09.398 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:09.398 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:09.398 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:09.398 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:09.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.398 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:09.399 [2024-12-09 05:10:23.247584] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6'
00:17:09.399 Failed to write to /dev/nvme-fabrics: Input/output error
00:17:09.399 could not add new controller: failed to write to nvme-fabrics device
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.399 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:11.315 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:17:11.315 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:11.315 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:11.315 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:11.315 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:13.227 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:13.227 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:13.227 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:13.227 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:13.227 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:13.227 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:13.227 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:13.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.227 [2024-12-09 05:10:27.128887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.227 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:15.139 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:15.139 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:15.139 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:15.139 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:15.139 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:17.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.048 [2024-12-09 05:10:30.988691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.048 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.048 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.048 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:17.048 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.048 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.048 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.048 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:18.961 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:18.961 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:18.961 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:18.961 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:18.961 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:20.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.872 [2024-12-09 05:10:34.838970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.872 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:22.782 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:22.782 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:22.782 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:22.782 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:22.782 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:24.691 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:24.691 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:24.691 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:24.691 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:24.691 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:24.691 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:24.692 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:24.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:24.692 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:24.692 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.952 [2024-12-09 05:10:38.754061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.952 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:26.343 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:26.343 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:26.343 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:26.343 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:26.343 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:28.886 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:28.886 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:28.886 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:28.886 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:28.886 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:28.886 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:28.886 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:32.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 [2024-12-09 05:10:46.420780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 [2024-12-09 05:10:46.488924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.540 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 [2024-12-09 05:10:46.557081] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 [2024-12-09 05:10:46.625261] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.814 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:17:32.814 "tick_rate": 2400000000,
00:17:32.814 "poll_groups": [
00:17:32.814 {
00:17:32.814 "name": "nvmf_tgt_poll_group_000",
00:17:32.814 "admin_qpairs": 0,
00:17:32.814 "io_qpairs": 224,
00:17:32.814 "current_admin_qpairs": 0,
00:17:32.814 "current_io_qpairs": 0,
00:17:32.814 "pending_bdev_io": 0,
00:17:32.814 "completed_nvme_io": 225,
00:17:32.814 "transports": [
00:17:32.814 {
00:17:32.814 "trtype": "TCP"
00:17:32.814 }
00:17:32.814 ]
00:17:32.814 },
00:17:32.814 {
00:17:32.814 "name": "nvmf_tgt_poll_group_001",
00:17:32.814 "admin_qpairs": 1,
00:17:32.814 "io_qpairs": 223,
00:17:32.814 "current_admin_qpairs": 0,
00:17:32.814 "current_io_qpairs": 0,
00:17:32.814 "pending_bdev_io": 0,
00:17:32.814 "completed_nvme_io": 277,
00:17:32.814 "transports": [
00:17:32.814 {
00:17:32.814 "trtype": "TCP"
00:17:32.814 }
00:17:32.814 ]
00:17:32.814 },
00:17:32.814 {
00:17:32.814 "name": "nvmf_tgt_poll_group_002",
00:17:32.814 "admin_qpairs": 6,
00:17:32.814 "io_qpairs": 218,
00:17:32.814 "current_admin_qpairs": 0,
00:17:32.814 "current_io_qpairs": 0,
00:17:32.814 "pending_bdev_io": 0,
00:17:32.815 "completed_nvme_io": 513,
00:17:32.815 "transports": [
00:17:32.815 {
00:17:32.815 "trtype": "TCP"
00:17:32.815 }
00:17:32.815 ]
00:17:32.815 },
00:17:32.815 {
00:17:32.815 "name": "nvmf_tgt_poll_group_003",
00:17:32.815 "admin_qpairs": 0,
00:17:32.815 "io_qpairs": 224,
00:17:32.815 "current_admin_qpairs": 0,
00:17:32.815 "current_io_qpairs": 0,
00:17:32.815 "pending_bdev_io": 0,
00:17:32.815 "completed_nvme_io": 224,
00:17:32.815 "transports": [
00:17:32.815 {
00:17:32.815 "trtype": "TCP"
00:17:32.815 }
00:17:32.815 ]
00:17:32.815 }
00:17:32.815 ]
00:17:32.815 }'
00:17:32.815 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:17:32.815 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:17:32.815 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:17:32.815 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:17:33.092 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 ))
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:33.093 rmmod nvme_tcp
00:17:33.093 rmmod nvme_fabrics
00:17:33.093 rmmod nvme_keyring
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1500562 ']'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1500562
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1500562 ']'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1500562
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1500562
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1500562'
killing process with pid 1500562
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1500562
00:17:33.093 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1500562
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:33.704 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:36.344
00:17:36.344 real 0m39.725s
00:17:36.344 user 1m58.590s
00:17:36.344 sys 0m8.314s
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:36.344 ************************************
00:17:36.344 END TEST nvmf_rpc
00:17:36.344 ************************************
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:36.344 ************************************
00:17:36.344 START TEST nvmf_invalid
00:17:36.344 ************************************
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:17:36.344 * Looking for test storage...
00:17:36.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version
00:17:36.344 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-:
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-:
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<'
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:36.344 --rc genhtml_branch_coverage=1
00:17:36.344 --rc genhtml_function_coverage=1
00:17:36.344 --rc genhtml_legend=1
00:17:36.344 --rc geninfo_all_blocks=1
00:17:36.344 --rc geninfo_unexecuted_blocks=1
00:17:36.344
00:17:36.344 '
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:36.344 --rc genhtml_branch_coverage=1
00:17:36.344 --rc genhtml_function_coverage=1
00:17:36.344 --rc genhtml_legend=1
00:17:36.344 --rc geninfo_all_blocks=1
00:17:36.344 --rc geninfo_unexecuted_blocks=1
00:17:36.344
00:17:36.344 '
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:36.344 --rc genhtml_branch_coverage=1
00:17:36.344 --rc genhtml_function_coverage=1
00:17:36.344 --rc genhtml_legend=1
00:17:36.344 --rc geninfo_all_blocks=1
00:17:36.344 --rc geninfo_unexecuted_blocks=1
00:17:36.344
00:17:36.344 '
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:36.344 --rc genhtml_branch_coverage=1
00:17:36.344 --rc genhtml_function_coverage=1
00:17:36.344 --rc genhtml_legend=1
00:17:36.344 --rc geninfo_all_blocks=1
00:17:36.344 --rc geninfo_unexecuted_blocks=1
00:17:36.344
00:17:36.344 '
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:17:36.344 05:10:50
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.344 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.345 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:44.554 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:44.554 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:44.554 Found net devices under 0000:31:00.0: cvl_0_0 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:44.554 Found net devices under 0000:31:00.1: cvl_0_1 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.554 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:44.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:17:44.555 00:17:44.555 --- 10.0.0.2 ping statistics --- 00:17:44.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.555 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:44.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:17:44.555 00:17:44.555 --- 10.0.0.1 ping statistics --- 00:17:44.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.555 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1510468 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1510468 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1510468 ']' 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.555 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:44.555 [2024-12-09 05:10:57.707574] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
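The nvmf_tgt that just launched is reachable only through the namespace plumbing nvmf_tcp_init performed above. Condensed straight from the xtrace (interface names, addresses, and the port-4420 rule are exactly as logged; a hand-runnable recap, not the common.sh source):

ip netns add cvl_0_0_ns_spdk                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The SPDK_NVMF comment tag is what lets nvmftestfini strip these rules later with the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence seen earlier during the nvmf_rpc teardown.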
00:17:44.555 [2024-12-09 05:10:57.707703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.555 [2024-12-09 05:10:57.873952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.555 [2024-12-09 05:10:58.004936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.555 [2024-12-09 05:10:58.004999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.555 [2024-12-09 05:10:58.005018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.555 [2024-12-09 05:10:58.005031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.555 [2024-12-09 05:10:58.005040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.555 [2024-12-09 05:10:58.007989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.555 [2024-12-09 05:10:58.008124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.555 [2024-12-09 05:10:58.008229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.555 [2024-12-09 05:10:58.008254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.555 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.555 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:44.555 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:44.555 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:44.555 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:44.555 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.555 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:44.555 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21869 00:17:44.816 [2024-12-09 05:10:58.711003] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:44.816 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:44.816 { 00:17:44.816 "nqn": "nqn.2016-06.io.spdk:cnode21869", 00:17:44.816 "tgt_name": "foobar", 00:17:44.816 "method": "nvmf_create_subsystem", 00:17:44.816 "req_id": 1 00:17:44.816 } 00:17:44.816 Got JSON-RPC error response 00:17:44.816 response: 00:17:44.816 { 00:17:44.816 "code": -32603, 00:17:44.816 "message": "Unable to find target foobar" 00:17:44.816 }' 00:17:44.816 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:44.816 { 00:17:44.816 "nqn": "nqn.2016-06.io.spdk:cnode21869", 00:17:44.816 "tgt_name": "foobar", 00:17:44.816 "method": "nvmf_create_subsystem", 00:17:44.816 "req_id": 1 00:17:44.816 } 00:17:44.816 Got JSON-RPC error response 00:17:44.816 
response: 00:17:44.816 { 00:17:44.816 "code": -32603, 00:17:44.816 "message": "Unable to find target foobar" 00:17:44.816 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:44.816 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:44.816 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19247 00:17:45.077 [2024-12-09 05:10:58.919797] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19247: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:45.077 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:45.077 { 00:17:45.077 "nqn": "nqn.2016-06.io.spdk:cnode19247", 00:17:45.077 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:45.077 "method": "nvmf_create_subsystem", 00:17:45.077 "req_id": 1 00:17:45.077 } 00:17:45.077 Got JSON-RPC error response 00:17:45.077 response: 00:17:45.077 { 00:17:45.077 "code": -32602, 00:17:45.077 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:45.077 }' 00:17:45.077 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:45.077 { 00:17:45.077 "nqn": "nqn.2016-06.io.spdk:cnode19247", 00:17:45.077 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:45.077 "method": "nvmf_create_subsystem", 00:17:45.077 "req_id": 1 00:17:45.077 } 00:17:45.077 Got JSON-RPC error response 00:17:45.077 response: 00:17:45.077 { 00:17:45.077 "code": -32602, 00:17:45.077 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:45.077 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:45.077 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:45.077 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25705 00:17:45.339 [2024-12-09 05:10:59.124604] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25705: invalid model number 'SPDK_Controller' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:45.340 { 00:17:45.340 "nqn": "nqn.2016-06.io.spdk:cnode25705", 00:17:45.340 "model_number": "SPDK_Controller\u001f", 00:17:45.340 "method": "nvmf_create_subsystem", 00:17:45.340 "req_id": 1 00:17:45.340 } 00:17:45.340 Got JSON-RPC error response 00:17:45.340 response: 00:17:45.340 { 00:17:45.340 "code": -32602, 00:17:45.340 "message": "Invalid MN SPDK_Controller\u001f" 00:17:45.340 }' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:45.340 { 00:17:45.340 "nqn": "nqn.2016-06.io.spdk:cnode25705", 00:17:45.340 "model_number": "SPDK_Controller\u001f", 00:17:45.340 "method": "nvmf_create_subsystem", 00:17:45.340 "req_id": 1 00:17:45.340 } 00:17:45.340 Got JSON-RPC error response 00:17:45.340 response: 00:17:45.340 { 00:17:45.340 "code": -32602, 00:17:45.340 "message": "Invalid MN SPDK_Controller\u001f" 00:17:45.340 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:45.340 05:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 
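The one-character-per-iteration xtrace here (which continues below until ll reaches the requested length of 21) is invalid.sh's gen_random_s building a junk serial number from the chars table of ASCII codes 32 through 127. A condensed sketch of the same technique, offered as an illustrative rewrite rather than the verbatim helper:

gen_random_s() {
    local length=$1 string='' code ch
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( 32 + RANDOM % 96 ))                 # same 96-entry range as the chars table above
        printf -v ch "\x$(printf '%x' "$code")"      # e.g. 85 -> \x55 -> 'U'
        string+=$ch
    done
    printf '%s\n' "$string"
}
gen_random_s 21    # e.g. the U7FN... serial that nvmf_create_subsystem rejects below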
00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.340 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:45.341 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x4d' 00:17:45.341 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:45.341 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.341 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.341 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:45.341 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:45.341 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:45.341 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.341 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'U7FNkun~Y`1hx'\''T85M(u' 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'U7FNkun~Y`1hx'\''T85M(u' nqn.2016-06.io.spdk:cnode2340 00:17:45.604 [2024-12-09 05:10:59.510121] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2340: invalid serial number 'U7FNkun~Y`1hx'T85M(u' 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:45.604 { 00:17:45.604 "nqn": "nqn.2016-06.io.spdk:cnode2340", 00:17:45.604 "serial_number": "U7FNk\u007fun~Y`1hx'\''T85M(u", 00:17:45.604 "method": "nvmf_create_subsystem", 00:17:45.604 "req_id": 1 00:17:45.604 } 00:17:45.604 Got JSON-RPC error response 00:17:45.604 response: 00:17:45.604 { 00:17:45.604 "code": -32602, 00:17:45.604 "message": "Invalid SN U7FNk\u007fun~Y`1hx'\''T85M(u" 00:17:45.604 }' 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:45.604 { 00:17:45.604 "nqn": "nqn.2016-06.io.spdk:cnode2340", 00:17:45.604 "serial_number": "U7FNk\u007fun~Y`1hx'T85M(u", 00:17:45.604 "method": "nvmf_create_subsystem", 00:17:45.604 "req_id": 1 00:17:45.604 } 00:17:45.604 Got JSON-RPC error response 00:17:45.604 response: 00:17:45.604 { 00:17:45.604 "code": -32602, 00:17:45.604 "message": "Invalid SN U7FNk\u007fun~Y`1hx'T85M(u" 00:17:45.604 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' 
'68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.604 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x68' 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:45.867 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 46 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='#' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:45.868 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:45.869 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # echo -e '\x61' 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ A == \- ]] 00:17:46.131 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'AQhhih#1CQ F.$ih9XC*]Q#O</{?_Y(s#{vz=J|a7' 00:17:48.538 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.538 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.089 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:51.089 00:17:51.089 real 0m14.692s 00:17:51.089 user 0m22.045s 00:17:51.089 sys 0m6.896s 00:17:51.089 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.089 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.090 ************************************ 00:17:51.090 END TEST nvmf_invalid 00:17:51.090 ************************************ 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:51.090 ************************************ 00:17:51.090 START TEST nvmf_connect_stress 00:17:51.090 ************************************ 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:51.090 * Looking for test storage...
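[Editor's note] The long xtrace run above is target/invalid.sh unrolling a character-by-character random-string generator: each iteration (lines @24/@25 of the script) prints a code point with printf %x, turns it back into a character with echo -e, and appends it to $string until ll reaches length; the final echo hands the assembled name (the 'AQhhih#1CQ F.$ih9XC*]Q#O</{?_Y(s#{vz=J|a7' value) to the invalid-name test cases. A minimal sketch of that pattern, assuming a printable-ASCII range; the helper name gen_random_string and the 0x20-0x7e range are illustrative, not the script's exact definition:

    gen_random_string() {
        local length=$1 string='' ll code ch
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( 32 + RANDOM % 95 ))               # random printable ASCII code point
            printf -v ch "\x$(printf '%x' "$code")"    # code point -> character
            string+=$ch                                # same string+= step as the trace
        done
        printf '%s\n' "$string"
    }

Calling gen_random_string 41 yields a 41-character name of the same shape as the one echoed above.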
00:17:51.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:51.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.090 --rc genhtml_branch_coverage=1 00:17:51.090 --rc genhtml_function_coverage=1 00:17:51.090 --rc genhtml_legend=1 00:17:51.090 --rc geninfo_all_blocks=1 00:17:51.090 --rc geninfo_unexecuted_blocks=1 00:17:51.090 00:17:51.090 ' 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:51.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.090 --rc genhtml_branch_coverage=1 00:17:51.090 --rc genhtml_function_coverage=1 00:17:51.090 --rc genhtml_legend=1 00:17:51.090 --rc geninfo_all_blocks=1 00:17:51.090 --rc geninfo_unexecuted_blocks=1 00:17:51.090 00:17:51.090 ' 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:51.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.090 --rc genhtml_branch_coverage=1 00:17:51.090 --rc genhtml_function_coverage=1 00:17:51.090 --rc genhtml_legend=1 00:17:51.090 --rc geninfo_all_blocks=1 00:17:51.090 --rc geninfo_unexecuted_blocks=1 00:17:51.090 00:17:51.090 ' 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:51.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.090 --rc genhtml_branch_coverage=1 00:17:51.090 --rc genhtml_function_coverage=1 00:17:51.090 --rc genhtml_legend=1 00:17:51.090 --rc geninfo_all_blocks=1 00:17:51.090 --rc geninfo_unexecuted_blocks=1 00:17:51.090 00:17:51.090 ' 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
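[Editor's note] The cmp_versions/lt trace above is a dotted-version less-than test: split both version strings on '.', then compare numerically field by field, treating missing fields as 0. A sketch of the same logic; version_lt is an illustrative name for what scripts/common.sh spells as lt and cmp_versions:

    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # versions equal, so not strictly less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"

That is exactly the comparison traced above: ver1[0]=1 against ver2[0]=2 decides the result on the first field, which is why the trace returns 0 and selects the legacy lcov options.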
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
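[Editor's note] The NVME_HOSTNQN/NVME_HOSTID lines above come from nvme gen-hostnqn (nvme-cli), which emits a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>; common.sh then derives the host ID from it. A sketch of the same derivation (the parameter-expansion spelling is illustrative; the script's exact form may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID suffix
    echo "$NVME_HOSTNQN -> $NVME_HOSTID"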
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.090 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
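[Editor's note] The PATH values above grow because paths/export.sh prepends the same Go/protoc/golangci directories every time it is sourced, once per test script in the run. This is harmless, but an idempotent prepend would keep PATH flat; a sketch, where prepend_path is an illustrative helper and not something export.sh defines:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;                # already present, leave PATH alone
            *) PATH=$1:$PATH ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH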
-- # '[' '' -eq 1 ']' 00:17:51.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:51.091 05:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:59.239 05:11:12 
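[Editor's note] The "[: : integer expression expected" message near the top of this chunk is a real, benign script error: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' because the variable under test expands to the empty string, and test(1) cannot compare an empty string numerically. The usual guard is a default expansion, sketched here; SPDK_TEST_FOO is a placeholder, not the variable common.sh actually tests:

    # ${VAR:-0} substitutes 0 when VAR is unset or empty, so -eq always
    # sees an integer on both sides
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi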
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:59.239 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
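[Editor's note] The e810/x722/mlx arrays above are filled from a pci_bus_cache map keyed by vendor:device ID; 0x8086:0x159b is the Intel E810 function found twice on this rig. Outside common.sh the same lookup can be approximated with lspci; an illustrative one-liner, not what the script itself runs:

    # list PCI addresses (domain included, numeric IDs) of Intel 0x159b devices
    lspci -Dnd 8086:159b | awk '{print $1}'    # -> 0000:31:00.0 and 0000:31:00.1 here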
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:59.239 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:59.239 Found net devices under 0000:31:00.0: cvl_0_0 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:59.239 Found net devices under 0000:31:00.1: cvl_0_1 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
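[Editor's note] The "Found net devices under ..." lines above come from mapping each PCI function to its kernel netdev via sysfs: every interface backed by a device appears under /sys/bus/pci/devices/<addr>/net/, which is exactly the glob the trace expands. A sketch, with netdevs_for_pci as an illustrative name:

    netdevs_for_pci() {
        local pci=$1 dev
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] && echo "${dev##*/}"
        done
    }
    netdevs_for_pci 0000:31:00.0    # -> cvl_0_0 on this machine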
-- # net_devs+=("${pci_net_devs[@]}") 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.239 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:59.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:17:59.240 00:17:59.240 --- 10.0.0.2 ping statistics --- 00:17:59.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.240 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:17:59.240 00:17:59.240 --- 10.0.0.1 ping statistics --- 00:17:59.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.240 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1515778 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1515778 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1515778 ']' 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
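[Editor's note] nvmf_tcp_init above builds the standard two-interface test topology: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables ACCEPT rule opens port 4420, and the two pings verify reachability in both directions. Condensed from the trace, with addresses and names exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator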
/var/tmp/spdk.sock...' 00:17:59.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.240 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.240 [2024-12-09 05:11:12.519204] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:59.240 [2024-12-09 05:11:12.519323] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.240 [2024-12-09 05:11:12.655941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:59.240 [2024-12-09 05:11:12.759353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.240 [2024-12-09 05:11:12.759422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.240 [2024-12-09 05:11:12.759433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.240 [2024-12-09 05:11:12.759444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.240 [2024-12-09 05:11:12.759452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.240 [2024-12-09 05:11:12.761847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.240 [2024-12-09 05:11:12.761973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.240 [2024-12-09 05:11:12.762151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 [2024-12-09 05:11:13.362720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
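[Editor's note] nvmfappstart above launches nvmf_tgt inside the target namespace (nvmfpid=1515778) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A simplified sketch of that wait loop; rpc_get_methods is a real SPDK RPC, while the retry count and interval here are illustrative:

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
            scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc "$nvmfpid"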
00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 [2024-12-09 05:11:13.391152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 NULL1 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1516034 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.502 05:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.502 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.503 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:59.764 05:11:13 
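[Editor's note] The setup traced above is the minimal NVMe-oF/TCP target bring-up: create the TCP transport, create a subsystem, attach a TCP listener on the namespaced address, back it with a null bdev, then launch the connect_stress binary against that subsystem while the seq 1 20 loop queues RPC calls into rpc.txt. The same sequence as plain rpc.py calls, with values exactly as logged (rpc_cmd in the trace is a wrapper around rpc.py, and paths are relative to the spdk checkout):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!                             # 1516034 in this run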
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.764 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.025 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.025 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 00:18:00.025 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.025 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.025 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.287 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.287 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 00:18:00.287 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.287 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.287 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.549 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.549 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 00:18:00.549 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.549 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.549 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.122 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.122 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 00:18:01.122 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.122 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.122 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.383 05:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.383 05:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 00:18:01.383 05:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.383 05:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.383 05:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.643 05:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.643 05:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 [... the @34 liveness check and @35 rpc_cmd batch repeat, near-verbatim, from 00:18:01.643 (05:11:15) through 00:18:09.463 (05:11:23); only the timestamps differ, consistent with the stress tool's 10-second runtime (-t 10), so the intervening iterations are condensed here ...] 00:18:09.463 05:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 00:18:09.463 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.463 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.463 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.724 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.724 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 00:18:09.724 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.724 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.724 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.724 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1516034 00:18:09.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1516034) - No such process 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1516034 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.985 05:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:09.985 rmmod nvme_tcp 00:18:09.985 rmmod nvme_fabrics 00:18:10.246 rmmod nvme_keyring 00:18:10.246 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.246 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:10.246 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:10.246 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1515778 ']' 00:18:10.246 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1515778 00:18:10.246 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1515778 ']' 00:18:10.246 05:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1515778 00:18:10.246 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:10.247 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.247 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1515778 00:18:10.247 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.247 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:10.247 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1515778' 00:18:10.247 killing process with pid 1515778 00:18:10.247 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1515778 00:18:10.247 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1515778 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.817 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:13.358 00:18:13.358 real 0m22.134s 00:18:13.358 user 0m45.506s 00:18:13.358 sys 0m8.084s 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.358 ************************************ 00:18:13.358 END TEST nvmf_connect_stress 00:18:13.358 ************************************ 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.358 
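The teardown just traced exercises autotest_common.sh's killprocess helper; from the @954-@978 lines above, its control flow reconstructs roughly as below. The sudo branch body is an assumption (it is not taken in this run, where the process name is reactor_1); everything else mirrors the trace.

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                             # @954: reject an empty pid ('[' -z ... ']')
      kill -0 "$pid"                                        # @958: confirm the process is still alive
      if [ "$(uname)" = Linux ]; then                       # @959
          process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_1 in this run
      fi
      if [ "$process_name" = sudo ]; then                   # @964: avoid killing a sudo wrapper
          :                                                 # assumed: resolve the real child pid instead (branch not taken here)
      fi
      echo "killing process with pid $pid"                  # @972
      kill "$pid"                                           # @973
      wait "$pid"                                           # @978: reap it so the exit status is collected
  }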
05:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.358 ************************************ 00:18:13.358 START TEST nvmf_fused_ordering 00:18:13.358 ************************************ 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:13.358 * Looking for test storage... 00:18:13.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.358 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:13.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.358 --rc genhtml_branch_coverage=1 00:18:13.358 --rc genhtml_function_coverage=1 00:18:13.358 --rc genhtml_legend=1 00:18:13.358 --rc geninfo_all_blocks=1 00:18:13.358 --rc geninfo_unexecuted_blocks=1 00:18:13.358 00:18:13.358 ' 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:13.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.358 --rc genhtml_branch_coverage=1 00:18:13.358 --rc genhtml_function_coverage=1 00:18:13.358 --rc genhtml_legend=1 00:18:13.358 --rc geninfo_all_blocks=1 00:18:13.358 --rc geninfo_unexecuted_blocks=1 00:18:13.358 00:18:13.358 ' 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:13.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.358 --rc genhtml_branch_coverage=1 00:18:13.358 --rc genhtml_function_coverage=1 00:18:13.358 --rc genhtml_legend=1 00:18:13.358 --rc geninfo_all_blocks=1 00:18:13.358 --rc geninfo_unexecuted_blocks=1 00:18:13.358 00:18:13.358 ' 00:18:13.358 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:13.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.359 --rc genhtml_branch_coverage=1 00:18:13.359 --rc genhtml_function_coverage=1 00:18:13.359 --rc genhtml_legend=1 00:18:13.359 --rc geninfo_all_blocks=1 00:18:13.359 --rc geninfo_unexecuted_blocks=1 00:18:13.359 00:18:13.359 ' 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:13.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:13.359 05:11:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:21.505 05:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:21.505 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:21.505 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:21.505 Found net devices under 0000:31:00.0: cvl_0_0 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:21.505 Found net devices under 0000:31:00.1: cvl_0_1 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.505 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:21.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:18:21.506 00:18:21.506 --- 10.0.0.2 ping statistics --- 00:18:21.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.506 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:21.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:18:21.506 00:18:21.506 --- 10.0.0.1 ping statistics --- 00:18:21.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.506 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1522437 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1522437 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1522437 ']' 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:21.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.506 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.506 [2024-12-09 05:11:34.597615] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:21.506 [2024-12-09 05:11:34.597738] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.506 [2024-12-09 05:11:34.763854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.506 [2024-12-09 05:11:34.887802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.506 [2024-12-09 05:11:34.887876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.506 [2024-12-09 05:11:34.887890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.506 [2024-12-09 05:11:34.887903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.506 [2024-12-09 05:11:34.887916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.506 [2024-12-09 05:11:34.889367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.506 [2024-12-09 05:11:35.441997] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.506 [2024-12-09 05:11:35.466364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.506 NULL1 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.506 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.768 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.768 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:21.768 [2024-12-09 05:11:35.560409] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
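Up to this point the harness has verified namespace-to-host connectivity with ping, started nvmf_tgt on core 1 inside the cvl_0_0_ns_spdk namespace, and issued the rpc_cmd calls traced above. For reference, the same bring-up can be reproduced by hand with the rpc.py client in the SPDK tree; every call below is copied from the trace, and the only assumption is that nvmf_tgt is already running and serving /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # flags mirror NVMF_TRANSPORT_OPTS in the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512             # 1000 MiB null bdev with 512 B blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Each call corresponds one-to-one with an rpc_cmd line above; the "TCP Transport Init" and "Listening on 10.0.0.2 port 4420" notices in the trace are the target-side acknowledgements of the first and third calls.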
00:18:21.768 [2024-12-09 05:11:35.560494] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522610 ]
00:18:22.340 Attached to nqn.2016-06.io.spdk:cnode1
00:18:22.340 Namespace ID: 1 size: 1GB
00:18:22.340 fused_ordering(0)
00:18:22.340 fused_ordering(1)
00:18:22.340 fused_ordering(2)
[fused_ordering(3) through fused_ordering(1022) elided for readability: the index increases strictly by one with no gaps or reordering, i.e. all 1024 events completed in order, while the elapsed timestamp advances from 00:18:22.340 to 00:18:24.639]
00:18:24.639 fused_ordering(1023)
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:24.639 rmmod nvme_tcp
00:18:24.639 rmmod nvme_fabrics
00:18:24.639 rmmod nvme_keyring
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
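The nvmfcleanup fragment above shows the teardown contract: flush dirty pages with sync, then unload the kernel initiator modules in dependency order (nvme_tcp first, pulling out nvme_fabrics and nvme_keyring with it). The for i in {1..20} line is a retry guard around the unload. A plausible shape of that loop is sketched here; the authoritative code is nvmfcleanup in test/nvmf/common.sh, and the back-off between attempts is an assumption, not something the trace shows:

    set +e                               # the module may still be referenced right after the app exits
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break # prints the rmmod lines seen above on success
        sleep 1                          # assumed back-off before retrying
    done
    modprobe -v -r nvme-fabrics
    set -e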
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1522437 ']'
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1522437
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1522437 ']'
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1522437
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:24.639 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1522437
00:18:24.901 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:24.901 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:24.901 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1522437'
00:18:24.901 killing process with pid 1522437
00:18:24.901 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1522437
00:18:24.901 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1522437
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:25.848 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:27.764 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:18:27.764
00:18:27.764 real 0m14.760s
00:18:27.764 user 0m8.932s
00:18:27.764 sys 0m7.286s
00:18:27.764 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:27.764 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:27.764 ************************************
00:18:27.764 END TEST nvmf_fused_ordering
************************************
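Fused commands are the NVMe mechanism behind atomic compare-and-write: two adjacent submission-queue entries (flagged FUSED_FIRST and FUSED_SECOND) that the controller must execute as a single ordered unit. The 1024 fused_ordering(N) lines above suggest the helper walks every slot of its queue and confirms the pairs complete without reordering; the precise check lives in test/nvme/fused_ordering/fused_ordering.c. The helper is self-contained and can be re-aimed at any live listener with the same -r transport string used in this run:

    fo=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering
    $fo -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'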
00:18:27.764 05:11:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:18:27.764 05:11:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:27.764 05:11:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:27.764 05:11:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:27.764 ************************************
00:18:27.764 START TEST nvmf_ns_masking
00:18:27.764 ************************************
00:18:27.764 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:18:28.026 * Looking for test storage...
00:18:28.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:28.026 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:28.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.027 --rc genhtml_branch_coverage=1 00:18:28.027 --rc genhtml_function_coverage=1 00:18:28.027 --rc genhtml_legend=1 00:18:28.027 --rc geninfo_all_blocks=1 00:18:28.027 --rc geninfo_unexecuted_blocks=1 00:18:28.027 00:18:28.027 ' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:28.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.027 --rc genhtml_branch_coverage=1 00:18:28.027 --rc genhtml_function_coverage=1 00:18:28.027 --rc genhtml_legend=1 00:18:28.027 --rc geninfo_all_blocks=1 00:18:28.027 --rc geninfo_unexecuted_blocks=1 00:18:28.027 00:18:28.027 ' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:28.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.027 --rc genhtml_branch_coverage=1 00:18:28.027 --rc genhtml_function_coverage=1 00:18:28.027 --rc genhtml_legend=1 00:18:28.027 --rc geninfo_all_blocks=1 00:18:28.027 --rc geninfo_unexecuted_blocks=1 00:18:28.027 00:18:28.027 ' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:28.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.027 --rc genhtml_branch_coverage=1 00:18:28.027 --rc genhtml_function_coverage=1 00:18:28.027 --rc genhtml_legend=1 00:18:28.027 --rc geninfo_all_blocks=1 00:18:28.027 --rc geninfo_unexecuted_blocks=1 00:18:28.027 00:18:28.027 ' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:28.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=29a9a0a4-e0d3-42f8-b8ec-73e9a1698f66 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=413012f1-93dc-4fbe-9ed2-1fc9ba52a84f 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=cf402ad9-d669-474e-a8ca-62b7e37f6de6 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:28.027 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:36.169 05:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:36.169 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:36.170 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:36.170 05:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:36.170 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:36.170 Found net devices under 0000:31:00.0: cvl_0_0 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
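The trace above is nvmf/common.sh enumerating PCI NICs: it builds tables of supported Intel (e810, x722) and Mellanox device IDs, intersects them with the bus, and resolves each matching function to its kernel net device through sysfs. A minimal standalone sketch of that discovery step, assuming the standard Linux sysfs layout; the 0x8086:0x159b pair is the E810 ID matched in this run, and the loop is an illustrative stand-in rather than the common.sh implementation itself:

for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
    # Match the Intel E810 ID (0x8086:0x159b) found in the run above
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        for net in "$pci"/net/*; do
            # Each entry under <pci>/net/ is a kernel netdev bound to that function
            [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
        done
    fi
done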
00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:36.170 Found net devices under 0000:31:00.1: cvl_0_1 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.170 05:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:36.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:18:36.170 00:18:36.170 --- 10.0.0.2 ping statistics --- 00:18:36.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.170 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:36.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:18:36.170 00:18:36.170 --- 10.0.0.1 ping statistics --- 00:18:36.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.170 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:36.170 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1527511 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1527511 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1527511 ']' 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.171 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:36.171 [2024-12-09 05:11:49.647107] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:36.171 [2024-12-09 05:11:49.647237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.171 [2024-12-09 05:11:49.814928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.171 [2024-12-09 05:11:49.936338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.171 [2024-12-09 05:11:49.936406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.171 [2024-12-09 05:11:49.936424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.171 [2024-12-09 05:11:49.936437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.171 [2024-12-09 05:11:49.936454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
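At this point nvmf_tcp_init has built the test topology: the first E810 port (cvl_0_0) is moved into a private network namespace as the target side, the second (cvl_0_1) stays in the root namespace as the initiator, each side gets a 10.0.0.0/24 address, TCP port 4420 is opened in iptables, and both directions are verified with ping before nvmf_tgt is launched inside the namespace. A condensed replay of that sequence, using the interface names and addresses from this run (root privileges assumed):

# Target side lives in its own netns; initiator side stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in on the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1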
00:18:36.171 [2024-12-09 05:11:49.937974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.743 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.743 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:36.743 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:36.743 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:36.744 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:36.744 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.744 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:36.744 [2024-12-09 05:11:50.644153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.744 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:36.744 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:36.744 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:37.004 Malloc1 00:18:37.004 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:37.265 Malloc2 00:18:37.265 05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:37.526 05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:37.787 05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.787 [2024-12-09 05:11:51.695017] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.787 05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:37.787 05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cf402ad9-d669-474e-a8ca-62b7e37f6de6 -a 10.0.0.2 -s 4420 -i 4 00:18:38.049 05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:38.049 05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:38.049 05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.049 05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:38.049 
05:11:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.963 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:40.223 [ 0]:0x1 00:18:40.223 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:40.223 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:40.223 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d22ed21d58bf4097a85adead0b79fb8e 00:18:40.223 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d22ed21d58bf4097a85adead0b79fb8e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:40.223 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:40.223 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:40.223 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:40.544 [ 0]:0x1 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d22ed21d58bf4097a85adead0b79fb8e 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d22ed21d58bf4097a85adead0b79fb8e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:40.544 05:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:40.544 [ 1]:0x2 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24e94fc3dca24772af6f4741b3c236e8 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24e94fc3dca24772af6f4741b3c236e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:40.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.544 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:40.804 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:40.804 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:40.804 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cf402ad9-d669-474e-a8ca-62b7e37f6de6 -a 10.0.0.2 -s 4420 -i 4 00:18:41.064 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:41.064 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:41.065 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.065 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:41.065 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:41.065 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:43.609 [ 0]:0x2 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=24e94fc3dca24772af6f4741b3c236e8 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24e94fc3dca24772af6f4741b3c236e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:43.609 [ 0]:0x1 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d22ed21d58bf4097a85adead0b79fb8e 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d22ed21d58bf4097a85adead0b79fb8e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:43.609 [ 1]:0x2 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24e94fc3dca24772af6f4741b3c236e8 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24e94fc3dca24772af6f4741b3c236e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.609 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.870 05:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.870 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:43.871 [ 0]:0x2 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24e94fc3dca24772af6f4741b3c236e8 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24e94fc3dca24772af6f4741b3c236e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.871 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:44.131 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:44.131 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cf402ad9-d669-474e-a8ca-62b7e37f6de6 -a 10.0.0.2 -s 4420 -i 4 00:18:44.392 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:44.392 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:44.392 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.392 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:44.392 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:44.392 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:46.306 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:46.306 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:46.306 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:46.306 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:46.306 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.306 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:46.306 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:46.306 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:46.566 [ 0]:0x1 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d22ed21d58bf4097a85adead0b79fb8e 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d22ed21d58bf4097a85adead0b79fb8e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:46.566 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:46.567 [ 1]:0x2 00:18:46.567 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:46.567 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:46.567 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24e94fc3dca24772af6f4741b3c236e8 00:18:46.567 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24e94fc3dca24772af6f4741b3c236e8 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:46.567 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:46.827 [ 0]:0x2 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24e94fc3dca24772af6f4741b3c236e8 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24e94fc3dca24772af6f4741b3c236e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:46.827 05:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.827 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.828 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.828 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.828 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.828 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.828 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.828 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:46.828 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:47.088 [2024-12-09 05:12:00.908583] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:47.088 request: 00:18:47.088 { 00:18:47.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.088 "nsid": 2, 00:18:47.088 "host": "nqn.2016-06.io.spdk:host1", 00:18:47.088 "method": "nvmf_ns_remove_host", 00:18:47.088 "req_id": 1 00:18:47.088 } 00:18:47.088 Got JSON-RPC error response 00:18:47.088 response: 00:18:47.088 { 00:18:47.088 "code": -32602, 00:18:47.088 "message": "Invalid parameters" 00:18:47.088 } 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:47.088 05:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:47.088 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:47.088 [ 0]:0x2 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24e94fc3dca24772af6f4741b3c236e8 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24e94fc3dca24772af6f4741b3c236e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:47.088 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:47.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1530047 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1530047 /var/tmp/host.sock 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1530047 ']' 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:47.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.349 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:47.349 [2024-12-09 05:12:01.208318] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:47.349 [2024-12-09 05:12:01.208429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530047 ] 00:18:47.609 [2024-12-09 05:12:01.349397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.609 [2024-12-09 05:12:01.447991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.179 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.179 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:48.179 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:48.438 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:48.699 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 29a9a0a4-e0d3-42f8-b8ec-73e9a1698f66 00:18:48.699 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:48.699 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 29A9A0A4E0D342F8B8EC73E9A1698F66 -i 00:18:48.699 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 413012f1-93dc-4fbe-9ed2-1fc9ba52a84f 00:18:48.699 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:48.699 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 413012F193DC4FBE9ED21FC9BA52A84F -i 00:18:48.973 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:49.300 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:49.300 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:49.300 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:49.576 nvme0n1 00:18:49.576 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:49.576 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:50.213 nvme1n2 00:18:50.213 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:50.213 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:50.213 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:50.213 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:50.213 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:50.213 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:50.213 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:50.213 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:50.213 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:50.472 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 29a9a0a4-e0d3-42f8-b8ec-73e9a1698f66 == \2\9\a\9\a\0\a\4\-\e\0\d\3\-\4\2\f\8\-\b\8\e\c\-\7\3\e\9\a\1\6\9\8\f\6\6 ]] 00:18:50.472 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:50.473 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:50.473 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:50.731 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
413012f1-93dc-4fbe-9ed2-1fc9ba52a84f == \4\1\3\0\1\2\f\1\-\9\3\d\c\-\4\f\b\e\-\9\e\d\2\-\1\f\c\9\b\a\5\2\a\8\4\f ]] 00:18:50.732 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:50.732 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 29a9a0a4-e0d3-42f8-b8ec-73e9a1698f66 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 29A9A0A4E0D342F8B8EC73E9A1698F66 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 29A9A0A4E0D342F8B8EC73E9A1698F66 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:51.004 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 29A9A0A4E0D342F8B8EC73E9A1698F66 00:18:51.004 [2024-12-09 05:12:04.988001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:51.004 [2024-12-09 05:12:04.988040] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:51.004 [2024-12-09 05:12:04.988054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:51.004 request: 00:18:51.004 { 00:18:51.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.004 "namespace": { 00:18:51.004 "bdev_name": 
"invalid", 00:18:51.004 "nsid": 1, 00:18:51.004 "nguid": "29A9A0A4E0D342F8B8EC73E9A1698F66", 00:18:51.004 "no_auto_visible": false, 00:18:51.004 "hide_metadata": false 00:18:51.004 }, 00:18:51.004 "method": "nvmf_subsystem_add_ns", 00:18:51.004 "req_id": 1 00:18:51.004 } 00:18:51.004 Got JSON-RPC error response 00:18:51.004 response: 00:18:51.004 { 00:18:51.004 "code": -32602, 00:18:51.004 "message": "Invalid parameters" 00:18:51.004 } 00:18:51.263 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:51.263 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.263 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.263 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.263 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 29a9a0a4-e0d3-42f8-b8ec-73e9a1698f66 00:18:51.263 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:51.263 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 29A9A0A4E0D342F8B8EC73E9A1698F66 -i 00:18:51.263 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1530047 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1530047 ']' 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1530047 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1530047 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1530047' 00:18:53.812 killing process with pid 1530047 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1530047 00:18:53.812 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1530047 00:18:54.750 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:55.010 rmmod nvme_tcp 00:18:55.010 rmmod nvme_fabrics 00:18:55.010 rmmod nvme_keyring 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1527511 ']' 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1527511 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1527511 ']' 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1527511 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1527511 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1527511' 00:18:55.010 killing process with pid 1527511 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1527511 00:18:55.010 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1527511 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
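
Condensed, the namespace-masking flow exercised above reduces to the sketch below. The $rpc and $hostrpc variables are shorthand introduced here for the two rpc.py invocations in the trace (default target socket vs. /var/tmp/host.sock); the commands and flags themselves are the ones traced.

rpc='scripts/rpc.py'                                # target RPC (default socket)
hostrpc='scripts/rpc.py -s /var/tmp/host.sock'      # host-side bdev_nvme app

uuid=29a9a0a4-e0d3-42f8-b8ec-73e9a1698f66
nguid=$(tr -d - <<< "${uuid^^}")                    # uuid2nguid: 29A9A0A4E0D3...

# Re-add namespace 1 hidden from all hosts (-i), then expose it to host1 only.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# Connect as host1 and check the surfaced bdev carries the expected UUID.
$hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
[[ $($hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid') == "$uuid" ]]
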
00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.951 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.863 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:57.863 00:18:57.863 real 0m30.044s 00:18:57.863 user 0m34.842s 00:18:57.863 sys 0m8.455s 00:18:57.863 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.863 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:57.863 ************************************ 00:18:57.863 END TEST nvmf_ns_masking 00:18:57.863 ************************************ 00:18:57.863 05:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:57.863 05:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:57.863 05:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:57.863 05:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.863 05:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:57.863 ************************************ 00:18:57.863 START TEST nvmf_nvme_cli 00:18:57.863 ************************************ 00:18:57.863 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:58.125 * Looking for test storage... 
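
The nvmftestfini teardown just traced reduces to roughly the following; the last line is an assumption, since _remove_spdk_ns runs with its xtrace suppressed above.

modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring    # unload initiator modules
iptables-save | grep -v SPDK_NVMF | iptables-restore # drop only SPDK's rules
ip -4 addr flush cvl_0_1                             # clear the initiator port
ip netns delete cvl_0_0_ns_spdk                      # assumed: what _remove_spdk_ns amounts to
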
00:18:58.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.125 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:58.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.125 --rc genhtml_branch_coverage=1 00:18:58.125 --rc genhtml_function_coverage=1 00:18:58.125 --rc genhtml_legend=1 00:18:58.125 --rc geninfo_all_blocks=1 00:18:58.125 --rc geninfo_unexecuted_blocks=1 00:18:58.125 00:18:58.125 ' 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:58.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.125 --rc genhtml_branch_coverage=1 00:18:58.125 --rc genhtml_function_coverage=1 00:18:58.125 --rc genhtml_legend=1 00:18:58.125 --rc geninfo_all_blocks=1 00:18:58.125 --rc geninfo_unexecuted_blocks=1 00:18:58.125 00:18:58.125 ' 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:58.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.125 --rc genhtml_branch_coverage=1 00:18:58.125 --rc genhtml_function_coverage=1 00:18:58.125 --rc genhtml_legend=1 00:18:58.125 --rc geninfo_all_blocks=1 00:18:58.125 --rc geninfo_unexecuted_blocks=1 00:18:58.125 00:18:58.125 ' 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:58.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.125 --rc genhtml_branch_coverage=1 00:18:58.125 --rc genhtml_function_coverage=1 00:18:58.125 --rc genhtml_legend=1 00:18:58.125 --rc geninfo_all_blocks=1 00:18:58.125 --rc geninfo_unexecuted_blocks=1 00:18:58.125 00:18:58.125 ' 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
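
The cmp_versions walk traced above (for lt 1.15 2) is a field-wise compare over dot/dash-separated components; a simplified sketch of just the less-than case, reusing the same IFS=.-: splitting as the trace:

version_lt() {
    local IFS=.-: i a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # earliest differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                         # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 < 2"            # matches the trace's return 0
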
00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.125 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.126 05:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:58.126 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.263 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:06.264 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:06.264 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.264 
05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:06.264 Found net devices under 0000:31:00.0: cvl_0_0 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:06.264 Found net devices under 0000:31:00.1: cvl_0_1 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:06.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:19:06.264 00:19:06.264 --- 10.0.0.2 ping statistics --- 00:19:06.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.264 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:19:06.264 00:19:06.264 --- 10.0.0.1 ping statistics --- 00:19:06.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.264 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1536339 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1536339 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1536339 ']' 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.264 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.265 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.265 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.265 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.265 [2024-12-09 05:12:19.663774] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
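
The phy-mode rig assembled above places one port of the detected e810 pair (cvl_0_0) in its own network namespace as the target and leaves its peer (cvl_0_1) in the root namespace as the initiator; replayed as plain commands from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                   # cross-namespace sanity check
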
00:19:06.265 [2024-12-09 05:12:19.663906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.265 [2024-12-09 05:12:19.835170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.265 [2024-12-09 05:12:19.972336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.265 [2024-12-09 05:12:19.972406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.265 [2024-12-09 05:12:19.972419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.265 [2024-12-09 05:12:19.972432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.265 [2024-12-09 05:12:19.972443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.265 [2024-12-09 05:12:19.975514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.265 [2024-12-09 05:12:19.975648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.265 [2024-12-09 05:12:19.975756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.265 [2024-12-09 05:12:19.975782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.528 [2024-12-09 05:12:20.502826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.528 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.788 Malloc0 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
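
At this point the target is live inside the namespace and the TCP transport is registered; a condensed replay of that bring-up (rpc_cmd is the suite's wrapper around scripts/rpc.py, and the transport flags are copied verbatim from NVMF_TRANSPORT_OPTS above):

# Launch the target inside the target namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"            # returns once /var/tmp/spdk.sock answers

rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # -t tcp -o -u 8192, as traced
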
00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.788 Malloc1 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.788 [2024-12-09 05:12:20.696859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.788 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:19:07.048 00:19:07.048 Discovery Log Number of Records 2, Generation counter 2 00:19:07.048 =====Discovery Log Entry 0====== 00:19:07.048 trtype: tcp 00:19:07.048 adrfam: ipv4 00:19:07.048 subtype: current discovery subsystem 00:19:07.048 treq: not required 00:19:07.048 portid: 0 00:19:07.048 trsvcid: 4420 00:19:07.048 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:07.048 traddr: 10.0.0.2 00:19:07.048 eflags: explicit discovery connections, duplicate discovery information 00:19:07.048 sectype: none 00:19:07.048 =====Discovery Log Entry 1====== 00:19:07.048 trtype: tcp 00:19:07.048 adrfam: ipv4 00:19:07.048 subtype: nvme subsystem 00:19:07.048 treq: not required 00:19:07.048 portid: 0 00:19:07.048 trsvcid: 4420 00:19:07.048 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:07.048 traddr: 10.0.0.2 00:19:07.048 eflags: none 00:19:07.048 sectype: none 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:07.048 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:08.430 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:08.430 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:08.430 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:08.430 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:08.430 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:08.430 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:10.968 05:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:10.968 /dev/nvme0n2 ]] 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:10.968 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:10.969 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.969 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:10.969 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:10.969 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.969 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:10.969 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.228 05:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.228 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.228 rmmod nvme_tcp 00:19:11.228 rmmod nvme_fabrics 00:19:11.228 rmmod nvme_keyring 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1536339 ']' 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1536339 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1536339 ']' 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1536339 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1536339 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536339' 00:19:11.488 killing process with pid 1536339 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1536339 00:19:11.488 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1536339 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.428 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.339 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:14.339 00:19:14.339 real 0m16.350s 00:19:14.339 user 0m26.023s 00:19:14.339 sys 0m6.566s 00:19:14.339 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.339 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:14.339 ************************************ 00:19:14.339 END TEST nvmf_nvme_cli 00:19:14.339 ************************************ 00:19:14.339 05:12:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:14.339 05:12:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:14.339 05:12:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.339 05:12:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.339 05:12:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:14.339 ************************************ 00:19:14.339 START TEST nvmf_auth_target 00:19:14.339 ************************************ 00:19:14.339 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
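The START TEST banner and the real/user/sys timing above come from run_test() in common/autotest_common.sh, which wraps each test script with a timer and pass/fail banners. A minimal sketch of that wrapper, reconstructed from the banners and timing lines in this trace (the exact SPDK body is an assumption; the real helper also manages xtrace and per-test bookkeeping):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines seen above for nvmf_nvme_cli
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

Here it is invoked as run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp, so everything below runs inside that timed wrapper.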
00:19:14.339 * Looking for test storage... 00:19:14.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.599 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:14.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.600 --rc genhtml_branch_coverage=1 00:19:14.600 --rc genhtml_function_coverage=1 00:19:14.600 --rc genhtml_legend=1 00:19:14.600 --rc geninfo_all_blocks=1 00:19:14.600 --rc geninfo_unexecuted_blocks=1 00:19:14.600 00:19:14.600 ' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:14.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.600 --rc genhtml_branch_coverage=1 00:19:14.600 --rc genhtml_function_coverage=1 00:19:14.600 --rc genhtml_legend=1 00:19:14.600 --rc geninfo_all_blocks=1 00:19:14.600 --rc geninfo_unexecuted_blocks=1 00:19:14.600 00:19:14.600 ' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:14.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.600 --rc genhtml_branch_coverage=1 00:19:14.600 --rc genhtml_function_coverage=1 00:19:14.600 --rc genhtml_legend=1 00:19:14.600 --rc geninfo_all_blocks=1 00:19:14.600 --rc geninfo_unexecuted_blocks=1 00:19:14.600 00:19:14.600 ' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:14.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.600 --rc genhtml_branch_coverage=1 00:19:14.600 --rc genhtml_function_coverage=1 00:19:14.600 --rc genhtml_legend=1 00:19:14.600 --rc geninfo_all_blocks=1 00:19:14.600 --rc geninfo_unexecuted_blocks=1 00:19:14.600 00:19:14.600 ' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.600 05:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:14.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:14.600 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:22.742 
05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:22.742 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:22.743 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.743 05:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:22.743 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:22.743 Found net devices under 0000:31:00.0: cvl_0_0 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:22.743 Found net devices under 0000:31:00.1: cvl_0_1 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:22.743 05:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:22.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:19:22.743 00:19:22.743 --- 10.0.0.2 ping statistics --- 00:19:22.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.743 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:19:22.743 00:19:22.743 --- 10.0.0.1 ping statistics --- 00:19:22.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.743 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:22.743 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1541844 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1541844 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1541844 ']' 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
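The namespace plumbing traced above (nvmf/common.sh@250 onward) isolates one e810 port as the TCP target and leaves its sibling in the root namespace as the initiator, then proves reachability in both directions. Condensed into a standalone sketch, with addresses and interface names exactly as logged:

    # Target NIC moves into a private namespace; initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP (port 4420) in through the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target, 0.698 ms above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

With the topology verified, modprobe nvme-tcp loads the initiator-side transport and nvmfappstart launches the target inside the namespace; the waitforlisten that follows polls /var/tmp/spdk.sock until the daemon is ready.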
00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.743 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1542112 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f45406bfa80a8314602eaa6ba446c510c54871030603f001 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.a94 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f45406bfa80a8314602eaa6ba446c510c54871030603f001 0 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f45406bfa80a8314602eaa6ba446c510c54871030603f001 0 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f45406bfa80a8314602eaa6ba446c510c54871030603f001 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:23.004 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
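The gen_dhchap_key null 48 call above walks through the whole key-generation helper: 48 means 48 hex characters, so xxd reads 24 bytes of /dev/urandom, the result is wrapped into a DHHC-1 string, and the secret lands in a mode-0600 temp file (the chmod and echo just below) whose path becomes keys[0]. Pieced together from the trace; the argument handling and the digests table appear in the log, but the assembled function body is an inference, not the verbatim source:

    gen_dhchap_key() {
        local digest len file key
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        digest=$1 len=$2                                 # e.g. "null" and 48
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 bytes
        file=$(mktemp -t "spdk.key-$digest.XXX")         # e.g. /tmp/spdk.key-null.a94
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"       # DHCHAP secrets must not be world-readable
        echo "$file"             # caller stores the path in keys[]/ckeys[]
    }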
00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.a94 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.a94 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.a94 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b84a497beb9ba6c528ce711e38408739265b2c941bf5f593e1d898a03e7961ea 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.x6q 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b84a497beb9ba6c528ce711e38408739265b2c941bf5f593e1d898a03e7961ea 3 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b84a497beb9ba6c528ce711e38408739265b2c941bf5f593e1d898a03e7961ea 3 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b84a497beb9ba6c528ce711e38408739265b2c941bf5f593e1d898a03e7961ea 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.x6q 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.x6q 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.x6q 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=864b5503d84769d1a0d9e5d16ae94878 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.91S 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 864b5503d84769d1a0d9e5d16ae94878 1 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 864b5503d84769d1a0d9e5d16ae94878 1 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=864b5503d84769d1a0d9e5d16ae94878 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.91S 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.91S 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.91S 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=29d18f1b7d00ff570fd20dd82a3f7f44b9b5e2ec73be74bd 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:23.266 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.h3q 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 29d18f1b7d00ff570fd20dd82a3f7f44b9b5e2ec73be74bd 2 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 29d18f1b7d00ff570fd20dd82a3f7f44b9b5e2ec73be74bd 2 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.267 05:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=29d18f1b7d00ff570fd20dd82a3f7f44b9b5e2ec73be74bd 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.h3q 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.h3q 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.h3q 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=17bb009220b11a5cf5f0945b9458e9e5fbaa3c430ddd26b8 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FcT 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 17bb009220b11a5cf5f0945b9458e9e5fbaa3c430ddd26b8 2 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 17bb009220b11a5cf5f0945b9458e9e5fbaa3c430ddd26b8 2 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=17bb009220b11a5cf5f0945b9458e9e5fbaa3c430ddd26b8 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:23.267 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FcT 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FcT 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.FcT 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
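Each format_dhchap_key call above reduces to format_key DHHC-1 <hex-secret> <hash-id> plus a short python program, producing the NVMe in-band-authentication secret representation: DHHC-1:<two-digit hash id>:<base64 of the secret bytes followed by a little-endian CRC32>:, with null/sha256/sha384/sha512 mapping to 00/01/02/03. Only the entry point and the python invocation are visible in the trace; the body below is a sketch of that encoding under the stated assumptions (hex string taken as the secret bytes, CRC32 trailer per the TP 8006-style representation):

    format_key() {
        local prefix=$1 key=$2 digest=$3    # e.g. DHHC-1, <48 hex chars>, 0
        python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("%s:%02x:%s:" % (sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()), end="")' "$prefix" "$key" "$digest"
    }

Under that reading, /tmp/spdk.key-null.a94 above holds a string beginning DHHC-1:00: and the sha512 ckey file begins DHHC-1:03:.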
00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a121f2753b782a5eee6af9715cfef0a3 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.abN 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a121f2753b782a5eee6af9715cfef0a3 1 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a121f2753b782a5eee6af9715cfef0a3 1 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a121f2753b782a5eee6af9715cfef0a3 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.abN 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.abN 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.abN 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=225356232d15b5985f79ab63dce7e90901c5ba7416d8c6e885ba7f5ac4af142e 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.H6m 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 225356232d15b5985f79ab63dce7e90901c5ba7416d8c6e885ba7f5ac4af142e 3 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 225356232d15b5985f79ab63dce7e90901c5ba7416d8c6e885ba7f5ac4af142e 3 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:23.529 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=225356232d15b5985f79ab63dce7e90901c5ba7416d8c6e885ba7f5ac4af142e 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.H6m 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.H6m 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.H6m 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1541844 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1541844 ']' 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.530 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.791 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.791 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:23.791 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1542112 /var/tmp/host.sock 00:19:23.791 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1542112 ']' 00:19:23.791 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:23.791 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.791 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:23.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
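Two processes are now up, and the test drives them over separate RPC sockets: nvmfpid=1541844 is the target (nvmf_tgt launched under ip netns exec cvl_0_0_ns_spdk, listening on the default /var/tmp/spdk.sock) and hostpid=1542112 is a second spdk_tgt playing the NVMe host, started with -r /var/tmp/host.sock. rpc_cmd goes to the former; the hostrpc wrapper that recurs through the rest of the trace (target/auth.sh@31) is simply rpc.py pointed at the host socket:

    hostrpc() {
        # Host-side RPCs target the spdk_tgt started with -r /var/tmp/host.sock.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

That is why every keyring_file_add_key below is issued twice: once via rpc_cmd so the target can verify the host, and once via hostrpc so the host can respond to, and with the ckeys also verify, the controller.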
00:19:23.791 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.791 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.a94 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.a94 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.a94 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.x6q ]] 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x6q 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x6q 00:19:24.362 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x6q 00:19:24.623 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:24.623 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.91S 00:19:24.623 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.623 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.623 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.623 05:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.91S 00:19:24.623 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.91S 00:19:24.883 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.h3q ]] 00:19:24.883 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h3q 00:19:24.883 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.883 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.883 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.883 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h3q 00:19:24.883 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h3q 00:19:25.144 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:25.144 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FcT 00:19:25.144 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.144 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.144 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.144 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FcT 00:19:25.144 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FcT 00:19:25.144 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.abN ]] 00:19:25.144 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.abN 00:19:25.144 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.144 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.144 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.144 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.abN 00:19:25.144 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.abN 00:19:25.405 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:25.405 05:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.H6m 00:19:25.405 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.405 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.405 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.405 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.H6m 00:19:25.405 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.H6m 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.665 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.666 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.666 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.666 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.666 
05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.926 00:19:25.926 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.926 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.926 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.186 { 00:19:26.186 "cntlid": 1, 00:19:26.186 "qid": 0, 00:19:26.186 "state": "enabled", 00:19:26.186 "thread": "nvmf_tgt_poll_group_000", 00:19:26.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:26.186 "listen_address": { 00:19:26.186 "trtype": "TCP", 00:19:26.186 "adrfam": "IPv4", 00:19:26.186 "traddr": "10.0.0.2", 00:19:26.186 "trsvcid": "4420" 00:19:26.186 }, 00:19:26.186 "peer_address": { 00:19:26.186 "trtype": "TCP", 00:19:26.186 "adrfam": "IPv4", 00:19:26.186 "traddr": "10.0.0.1", 00:19:26.186 "trsvcid": "44222" 00:19:26.186 }, 00:19:26.186 "auth": { 00:19:26.186 "state": "completed", 00:19:26.186 "digest": "sha256", 00:19:26.186 "dhgroup": "null" 00:19:26.186 } 00:19:26.186 } 00:19:26.186 ]' 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:26.186 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.446 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.446 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.446 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.446 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:26.446 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.387 05:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.387 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.648 00:19:27.648 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.648 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.648 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.909 { 00:19:27.909 "cntlid": 3, 00:19:27.909 "qid": 0, 00:19:27.909 "state": "enabled", 00:19:27.909 "thread": "nvmf_tgt_poll_group_000", 00:19:27.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:27.909 "listen_address": { 00:19:27.909 "trtype": "TCP", 00:19:27.909 "adrfam": "IPv4", 00:19:27.909 "traddr": "10.0.0.2", 00:19:27.909 "trsvcid": "4420" 00:19:27.909 }, 00:19:27.909 "peer_address": { 00:19:27.909 "trtype": "TCP", 00:19:27.909 "adrfam": "IPv4", 00:19:27.909 "traddr": "10.0.0.1", 00:19:27.909 "trsvcid": "44246" 00:19:27.909 }, 00:19:27.909 "auth": { 00:19:27.909 "state": "completed", 00:19:27.909 "digest": "sha256", 00:19:27.909 "dhgroup": "null" 00:19:27.909 } 00:19:27.909 } 00:19:27.909 ]' 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.909 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.169 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:28.169 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:28.741 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.741 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:28.741 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.741 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.741 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.741 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.741 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.741 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.001 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.002 05:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.002 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.263 00:19:29.263 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.263 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.263 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.523 { 00:19:29.523 "cntlid": 5, 00:19:29.523 "qid": 0, 00:19:29.523 "state": "enabled", 00:19:29.523 "thread": "nvmf_tgt_poll_group_000", 00:19:29.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:29.523 "listen_address": { 00:19:29.523 "trtype": "TCP", 00:19:29.523 "adrfam": "IPv4", 00:19:29.523 "traddr": "10.0.0.2", 00:19:29.523 "trsvcid": "4420" 00:19:29.523 }, 00:19:29.523 "peer_address": { 00:19:29.523 "trtype": "TCP", 00:19:29.523 "adrfam": "IPv4", 00:19:29.523 "traddr": "10.0.0.1", 00:19:29.523 "trsvcid": "42192" 00:19:29.523 }, 00:19:29.523 "auth": { 00:19:29.523 "state": "completed", 00:19:29.523 "digest": "sha256", 00:19:29.523 "dhgroup": "null" 00:19:29.523 } 00:19:29.523 } 00:19:29.523 ]' 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.523 05:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.523 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.783 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:29.783 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:30.352 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.352 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:30.352 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.352 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.352 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.352 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.352 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.352 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.612 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.871 00:19:30.871 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.871 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.871 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.131 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.131 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.131 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.131 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.131 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.131 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.131 { 00:19:31.131 "cntlid": 7, 00:19:31.131 "qid": 0, 00:19:31.131 "state": "enabled", 00:19:31.131 "thread": "nvmf_tgt_poll_group_000", 00:19:31.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:31.131 "listen_address": { 00:19:31.131 "trtype": "TCP", 00:19:31.131 "adrfam": "IPv4", 00:19:31.131 "traddr": "10.0.0.2", 00:19:31.131 "trsvcid": "4420" 00:19:31.131 }, 00:19:31.131 "peer_address": { 00:19:31.131 "trtype": "TCP", 00:19:31.131 "adrfam": "IPv4", 00:19:31.131 "traddr": "10.0.0.1", 00:19:31.131 "trsvcid": "42226" 00:19:31.131 }, 00:19:31.131 "auth": { 00:19:31.131 "state": "completed", 00:19:31.131 "digest": "sha256", 00:19:31.131 "dhgroup": "null" 00:19:31.131 } 00:19:31.131 } 00:19:31.131 ]' 00:19:31.131 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.131 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.131 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.131 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:31.131 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.131 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.131 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.131 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.390 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:31.390 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:31.960 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.960 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:31.960 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.960 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.960 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.960 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.960 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.960 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.960 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.219 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.220 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.479 00:19:32.479 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.479 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.479 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.740 { 00:19:32.740 "cntlid": 9, 00:19:32.740 "qid": 0, 00:19:32.740 "state": "enabled", 00:19:32.740 "thread": "nvmf_tgt_poll_group_000", 00:19:32.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:32.740 "listen_address": { 00:19:32.740 "trtype": "TCP", 00:19:32.740 "adrfam": "IPv4", 00:19:32.740 "traddr": "10.0.0.2", 00:19:32.740 "trsvcid": "4420" 00:19:32.740 }, 00:19:32.740 "peer_address": { 00:19:32.740 "trtype": "TCP", 00:19:32.740 "adrfam": "IPv4", 00:19:32.740 "traddr": "10.0.0.1", 00:19:32.740 "trsvcid": "42262" 00:19:32.740 }, 00:19:32.740 "auth": { 00:19:32.740 "state": "completed", 00:19:32.740 "digest": "sha256", 00:19:32.740 "dhgroup": "ffdhe2048" 00:19:32.740 } 00:19:32.740 } 00:19:32.740 ]' 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.740 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.000 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:33.000 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:33.569 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.569 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:33.569 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.569 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.569 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.569 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.569 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.569 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.828 05:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.828 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.087 00:19:34.087 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.087 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.087 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.347 { 00:19:34.347 "cntlid": 11, 00:19:34.347 "qid": 0, 00:19:34.347 "state": "enabled", 00:19:34.347 "thread": "nvmf_tgt_poll_group_000", 00:19:34.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:34.347 "listen_address": { 00:19:34.347 "trtype": "TCP", 00:19:34.347 "adrfam": "IPv4", 00:19:34.347 "traddr": "10.0.0.2", 00:19:34.347 "trsvcid": "4420" 00:19:34.347 }, 00:19:34.347 "peer_address": { 00:19:34.347 "trtype": "TCP", 00:19:34.347 "adrfam": "IPv4", 00:19:34.347 "traddr": "10.0.0.1", 00:19:34.347 "trsvcid": "42296" 00:19:34.347 }, 00:19:34.347 "auth": { 00:19:34.347 "state": "completed", 00:19:34.347 "digest": "sha256", 00:19:34.347 "dhgroup": "ffdhe2048" 00:19:34.347 } 00:19:34.347 } 00:19:34.347 ]' 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.347 05:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.347 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.608 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:34.608 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:35.176 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.176 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:35.176 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.176 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.176 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.176 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.176 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.176 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:35.437 05:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.437 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.697 00:19:35.697 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.697 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.697 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.957 { 00:19:35.957 "cntlid": 13, 00:19:35.957 "qid": 0, 00:19:35.957 "state": "enabled", 00:19:35.957 "thread": "nvmf_tgt_poll_group_000", 00:19:35.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:35.957 "listen_address": { 00:19:35.957 "trtype": "TCP", 00:19:35.957 "adrfam": "IPv4", 00:19:35.957 "traddr": "10.0.0.2", 00:19:35.957 "trsvcid": "4420" 00:19:35.957 }, 00:19:35.957 "peer_address": { 00:19:35.957 "trtype": "TCP", 00:19:35.957 "adrfam": "IPv4", 00:19:35.957 "traddr": "10.0.0.1", 00:19:35.957 "trsvcid": "42324" 00:19:35.957 }, 00:19:35.957 "auth": { 00:19:35.957 "state": "completed", 00:19:35.957 "digest": 
"sha256", 00:19:35.957 "dhgroup": "ffdhe2048" 00:19:35.957 } 00:19:35.957 } 00:19:35.957 ]' 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.957 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.958 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.958 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.217 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:36.217 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:36.787 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.787 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:36.787 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.787 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.787 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.787 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.787 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.787 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.047 05:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.047 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.307 00:19:37.307 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.307 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.307 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.566 { 00:19:37.566 "cntlid": 15, 00:19:37.566 "qid": 0, 00:19:37.566 "state": "enabled", 00:19:37.566 "thread": "nvmf_tgt_poll_group_000", 00:19:37.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:37.566 "listen_address": { 00:19:37.566 "trtype": "TCP", 00:19:37.566 "adrfam": "IPv4", 00:19:37.566 "traddr": "10.0.0.2", 00:19:37.566 "trsvcid": "4420" 00:19:37.566 }, 00:19:37.566 "peer_address": { 00:19:37.566 "trtype": "TCP", 00:19:37.566 "adrfam": "IPv4", 00:19:37.566 "traddr": "10.0.0.1", 00:19:37.566 
"trsvcid": "42354" 00:19:37.566 }, 00:19:37.566 "auth": { 00:19:37.566 "state": "completed", 00:19:37.566 "digest": "sha256", 00:19:37.566 "dhgroup": "ffdhe2048" 00:19:37.566 } 00:19:37.566 } 00:19:37.566 ]' 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.566 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.825 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:37.825 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:38.395 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.395 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:38.395 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.395 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.396 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.396 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.396 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.396 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.396 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.655 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:38.655 05:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.655 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.655 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:38.655 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:38.655 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.655 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.655 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.655 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.655 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.656 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.656 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.656 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.916 00:19:38.916 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.916 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.916 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.176 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.176 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.176 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.176 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.176 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.176 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.176 { 00:19:39.176 "cntlid": 17, 00:19:39.176 "qid": 0, 00:19:39.176 "state": "enabled", 00:19:39.176 "thread": "nvmf_tgt_poll_group_000", 00:19:39.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:39.176 "listen_address": { 00:19:39.176 "trtype": "TCP", 00:19:39.176 "adrfam": "IPv4", 
00:19:39.176 "traddr": "10.0.0.2", 00:19:39.176 "trsvcid": "4420" 00:19:39.176 }, 00:19:39.176 "peer_address": { 00:19:39.176 "trtype": "TCP", 00:19:39.176 "adrfam": "IPv4", 00:19:39.176 "traddr": "10.0.0.1", 00:19:39.176 "trsvcid": "42394" 00:19:39.176 }, 00:19:39.176 "auth": { 00:19:39.176 "state": "completed", 00:19:39.176 "digest": "sha256", 00:19:39.176 "dhgroup": "ffdhe3072" 00:19:39.176 } 00:19:39.176 } 00:19:39.176 ]' 00:19:39.176 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.176 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.176 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.176 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.176 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.176 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.176 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.176 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.437 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:39.437 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:40.009 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.009 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:40.009 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.009 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.009 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.009 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.009 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.009 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.269 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.530 00:19:40.530 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.530 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.530 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.791 { 
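Besides the SPDK host path (bdev_nvme_attach_controller), every pass also authenticates the kernel initiator through nvme-cli, via the nvme_connect helper traced throughout. The shape of that call, with placeholder secrets standing in for the real pre-generated keys, is:

    # Flags copied from the trace; the placeholder secrets follow the same
    # DHHC-1:<nn>:<base64>: representation as the real values in this log.
    nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --dhchap-secret 'DHHC-1:02:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<ctrl key>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0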
00:19:40.791 "cntlid": 19, 00:19:40.791 "qid": 0, 00:19:40.791 "state": "enabled", 00:19:40.791 "thread": "nvmf_tgt_poll_group_000", 00:19:40.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:40.791 "listen_address": { 00:19:40.791 "trtype": "TCP", 00:19:40.791 "adrfam": "IPv4", 00:19:40.791 "traddr": "10.0.0.2", 00:19:40.791 "trsvcid": "4420" 00:19:40.791 }, 00:19:40.791 "peer_address": { 00:19:40.791 "trtype": "TCP", 00:19:40.791 "adrfam": "IPv4", 00:19:40.791 "traddr": "10.0.0.1", 00:19:40.791 "trsvcid": "40660" 00:19:40.791 }, 00:19:40.791 "auth": { 00:19:40.791 "state": "completed", 00:19:40.791 "digest": "sha256", 00:19:40.791 "dhgroup": "ffdhe3072" 00:19:40.791 } 00:19:40.791 } 00:19:40.791 ]' 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.791 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.052 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:41.052 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:41.620 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.620 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:41.620 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.620 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.620 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.620 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.620 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.620 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.881 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.141 00:19:42.141 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.141 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.141 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.400 05:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.400 { 00:19:42.400 "cntlid": 21, 00:19:42.400 "qid": 0, 00:19:42.400 "state": "enabled", 00:19:42.400 "thread": "nvmf_tgt_poll_group_000", 00:19:42.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:42.400 "listen_address": { 00:19:42.400 "trtype": "TCP", 00:19:42.400 "adrfam": "IPv4", 00:19:42.400 "traddr": "10.0.0.2", 00:19:42.400 "trsvcid": "4420" 00:19:42.400 }, 00:19:42.400 "peer_address": { 00:19:42.400 "trtype": "TCP", 00:19:42.400 "adrfam": "IPv4", 00:19:42.400 "traddr": "10.0.0.1", 00:19:42.400 "trsvcid": "40692" 00:19:42.400 }, 00:19:42.400 "auth": { 00:19:42.400 "state": "completed", 00:19:42.400 "digest": "sha256", 00:19:42.400 "dhgroup": "ffdhe3072" 00:19:42.400 } 00:19:42.400 } 00:19:42.400 ]' 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.400 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.659 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:42.659 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:43.229 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.229 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:43.229 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.229 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.229 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:43.229 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.229 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.229 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.490 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.750 00:19:43.750 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.750 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.750 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.010 05:12:57 
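Note that the key3 passes, like the ffdhe3072/key3 pass traced just above, add the host with --dhchap-key only and connect with no --dhchap-ctrl-secret: ckeys[3] is empty in this run, so these iterations exercise unidirectional (host-only) authentication. A condensed reading of the guard that produces this, inside connect_authenticate where $3 is the key index (the wrapper and variable names here follow the trace's shorthand):

    # Empty ckeys[3] makes the :+ expansion yield nothing, dropping the
    # controller (bidirectional) key; keys 0-2 expand to --dhchap-ctrlr-key ckeyN.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$3" "${ckey[@]}"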
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.010 { 00:19:44.010 "cntlid": 23, 00:19:44.010 "qid": 0, 00:19:44.010 "state": "enabled", 00:19:44.010 "thread": "nvmf_tgt_poll_group_000", 00:19:44.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:44.010 "listen_address": { 00:19:44.010 "trtype": "TCP", 00:19:44.010 "adrfam": "IPv4", 00:19:44.010 "traddr": "10.0.0.2", 00:19:44.010 "trsvcid": "4420" 00:19:44.010 }, 00:19:44.010 "peer_address": { 00:19:44.010 "trtype": "TCP", 00:19:44.010 "adrfam": "IPv4", 00:19:44.010 "traddr": "10.0.0.1", 00:19:44.010 "trsvcid": "40716" 00:19:44.010 }, 00:19:44.010 "auth": { 00:19:44.010 "state": "completed", 00:19:44.010 "digest": "sha256", 00:19:44.010 "dhgroup": "ffdhe3072" 00:19:44.010 } 00:19:44.010 } 00:19:44.010 ]' 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.010 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.269 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:44.269 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:44.839 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.839 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:44.839 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.839 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.839 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:44.839 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.839 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.839 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.839 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.100 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:45.100 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.100 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.100 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:45.100 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:45.100 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.100 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.100 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.100 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.100 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.100 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.100 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.101 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.361 00:19:45.361 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.361 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.361 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.620 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.620 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.620 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.621 { 00:19:45.621 "cntlid": 25, 00:19:45.621 "qid": 0, 00:19:45.621 "state": "enabled", 00:19:45.621 "thread": "nvmf_tgt_poll_group_000", 00:19:45.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:45.621 "listen_address": { 00:19:45.621 "trtype": "TCP", 00:19:45.621 "adrfam": "IPv4", 00:19:45.621 "traddr": "10.0.0.2", 00:19:45.621 "trsvcid": "4420" 00:19:45.621 }, 00:19:45.621 "peer_address": { 00:19:45.621 "trtype": "TCP", 00:19:45.621 "adrfam": "IPv4", 00:19:45.621 "traddr": "10.0.0.1", 00:19:45.621 "trsvcid": "40740" 00:19:45.621 }, 00:19:45.621 "auth": { 00:19:45.621 "state": "completed", 00:19:45.621 "digest": "sha256", 00:19:45.621 "dhgroup": "ffdhe4096" 00:19:45.621 } 00:19:45.621 } 00:19:45.621 ]' 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.621 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.881 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:45.881 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:46.453 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.714 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.974 00:19:46.974 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.974 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.974 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.232 { 00:19:47.232 "cntlid": 27, 00:19:47.232 "qid": 0, 00:19:47.232 "state": "enabled", 00:19:47.232 "thread": "nvmf_tgt_poll_group_000", 00:19:47.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:47.232 "listen_address": { 00:19:47.232 "trtype": "TCP", 00:19:47.232 "adrfam": "IPv4", 00:19:47.232 "traddr": "10.0.0.2", 00:19:47.232 "trsvcid": "4420" 00:19:47.232 }, 00:19:47.232 "peer_address": { 00:19:47.232 "trtype": "TCP", 00:19:47.232 "adrfam": "IPv4", 00:19:47.232 "traddr": "10.0.0.1", 00:19:47.232 "trsvcid": "40754" 00:19:47.232 }, 00:19:47.232 "auth": { 00:19:47.232 "state": "completed", 00:19:47.232 "digest": "sha256", 00:19:47.232 "dhgroup": "ffdhe4096" 00:19:47.232 } 00:19:47.232 } 00:19:47.232 ]' 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.232 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.491 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.491 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.491 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.491 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:47.491 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:48.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.430 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.690 00:19:48.690 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
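The cadence visible across this whole stretch (set_options, connect_authenticate, teardown, next key; bump the dhgroup every fourth pass) is driven by the nested loops at auth.sh@119-121, roughly as sketched below; the digest stays fixed at sha256 here, presumably held by an outer loop not visible in this excerpt.

    # Condensed from the for-loop markers traced above (auth.sh@119-123);
    # dhgroups continues past ffdhe4096 to ffdhe6144, as seen at 05:13:05 below.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done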
00:19:48.690 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.690 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.949 { 00:19:48.949 "cntlid": 29, 00:19:48.949 "qid": 0, 00:19:48.949 "state": "enabled", 00:19:48.949 "thread": "nvmf_tgt_poll_group_000", 00:19:48.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:48.949 "listen_address": { 00:19:48.949 "trtype": "TCP", 00:19:48.949 "adrfam": "IPv4", 00:19:48.949 "traddr": "10.0.0.2", 00:19:48.949 "trsvcid": "4420" 00:19:48.949 }, 00:19:48.949 "peer_address": { 00:19:48.949 "trtype": "TCP", 00:19:48.949 "adrfam": "IPv4", 00:19:48.949 "traddr": "10.0.0.1", 00:19:48.949 "trsvcid": "40762" 00:19:48.949 }, 00:19:48.949 "auth": { 00:19:48.949 "state": "completed", 00:19:48.949 "digest": "sha256", 00:19:48.949 "dhgroup": "ffdhe4096" 00:19:48.949 } 00:19:48.949 } 00:19:48.949 ]' 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.949 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.208 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:49.208 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: 
--dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:49.777 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.777 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:49.777 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.777 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.777 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.777 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.777 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.777 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.037 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.296 00:19:50.296 05:13:04 
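Each pass also tears down symmetrically before the next keyid, as in the detach/disconnect/remove_host triplets repeated through this trace:

    # Per-pass teardown, condensed from the trace: drop the SPDK host's bdev
    # controller, then the kernel initiator's session, then the target-side
    # host entry, leaving the subsystem clean for the next key/dhgroup pass.
    hostrpc bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6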
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.296 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.297 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.556 { 00:19:50.556 "cntlid": 31, 00:19:50.556 "qid": 0, 00:19:50.556 "state": "enabled", 00:19:50.556 "thread": "nvmf_tgt_poll_group_000", 00:19:50.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:50.556 "listen_address": { 00:19:50.556 "trtype": "TCP", 00:19:50.556 "adrfam": "IPv4", 00:19:50.556 "traddr": "10.0.0.2", 00:19:50.556 "trsvcid": "4420" 00:19:50.556 }, 00:19:50.556 "peer_address": { 00:19:50.556 "trtype": "TCP", 00:19:50.556 "adrfam": "IPv4", 00:19:50.556 "traddr": "10.0.0.1", 00:19:50.556 "trsvcid": "33164" 00:19:50.556 }, 00:19:50.556 "auth": { 00:19:50.556 "state": "completed", 00:19:50.556 "digest": "sha256", 00:19:50.556 "dhgroup": "ffdhe4096" 00:19:50.556 } 00:19:50.556 } 00:19:50.556 ]' 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.556 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.815 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:50.815 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret 
DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:51.384 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.643 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.915 00:19:52.175 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.175 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.175 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.175 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.175 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.175 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.175 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.175 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.175 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.175 { 00:19:52.175 "cntlid": 33, 00:19:52.175 "qid": 0, 00:19:52.175 "state": "enabled", 00:19:52.175 "thread": "nvmf_tgt_poll_group_000", 00:19:52.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:52.175 "listen_address": { 00:19:52.175 "trtype": "TCP", 00:19:52.175 "adrfam": "IPv4", 00:19:52.175 "traddr": "10.0.0.2", 00:19:52.175 "trsvcid": "4420" 00:19:52.175 }, 00:19:52.175 "peer_address": { 00:19:52.175 "trtype": "TCP", 00:19:52.175 "adrfam": "IPv4", 00:19:52.175 "traddr": "10.0.0.1", 00:19:52.175 "trsvcid": "33186" 00:19:52.175 }, 00:19:52.175 "auth": { 00:19:52.175 "state": "completed", 00:19:52.175 "digest": "sha256", 00:19:52.175 "dhgroup": "ffdhe6144" 00:19:52.175 } 00:19:52.175 } 00:19:52.175 ]' 00:19:52.175 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.175 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.176 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.435 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.435 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.435 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.435 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.435 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.435 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret 
DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:52.435 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:53.375 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.375 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:53.375 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.375 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.376 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.636 00:19:53.636 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.636 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.636 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.896 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.896 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.896 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.896 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.896 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.896 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.896 { 00:19:53.896 "cntlid": 35, 00:19:53.896 "qid": 0, 00:19:53.897 "state": "enabled", 00:19:53.897 "thread": "nvmf_tgt_poll_group_000", 00:19:53.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:53.897 "listen_address": { 00:19:53.897 "trtype": "TCP", 00:19:53.897 "adrfam": "IPv4", 00:19:53.897 "traddr": "10.0.0.2", 00:19:53.897 "trsvcid": "4420" 00:19:53.897 }, 00:19:53.897 "peer_address": { 00:19:53.897 "trtype": "TCP", 00:19:53.897 "adrfam": "IPv4", 00:19:53.897 "traddr": "10.0.0.1", 00:19:53.897 "trsvcid": "33208" 00:19:53.897 }, 00:19:53.897 "auth": { 00:19:53.897 "state": "completed", 00:19:53.897 "digest": "sha256", 00:19:53.897 "dhgroup": "ffdhe6144" 00:19:53.897 } 00:19:53.897 } 00:19:53.897 ]' 00:19:53.897 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.897 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.897 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.897 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.897 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.157 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.157 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.157 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.157 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:54.157 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:19:54.726 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.987 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.248 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.508 { 00:19:55.508 "cntlid": 37, 00:19:55.508 "qid": 0, 00:19:55.508 "state": "enabled", 00:19:55.508 "thread": "nvmf_tgt_poll_group_000", 00:19:55.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:55.508 "listen_address": { 00:19:55.508 "trtype": "TCP", 00:19:55.508 "adrfam": "IPv4", 00:19:55.508 "traddr": "10.0.0.2", 00:19:55.508 "trsvcid": "4420" 00:19:55.508 }, 00:19:55.508 "peer_address": { 00:19:55.508 "trtype": "TCP", 00:19:55.508 "adrfam": "IPv4", 00:19:55.508 "traddr": "10.0.0.1", 00:19:55.508 "trsvcid": "33238" 00:19:55.508 }, 00:19:55.508 "auth": { 00:19:55.508 "state": "completed", 00:19:55.508 "digest": "sha256", 00:19:55.508 "dhgroup": "ffdhe6144" 00:19:55.508 } 00:19:55.508 } 00:19:55.508 ]' 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.508 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.770 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.770 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.770 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.770 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:55.770 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.770 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:55.770 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.731 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.732 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:56.732 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.732 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.732 05:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.732 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.732 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.732 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.992 00:19:57.251 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.251 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.251 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.251 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.251 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.251 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.251 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.251 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.251 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.251 { 00:19:57.251 "cntlid": 39, 00:19:57.251 "qid": 0, 00:19:57.251 "state": "enabled", 00:19:57.251 "thread": "nvmf_tgt_poll_group_000", 00:19:57.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:57.251 "listen_address": { 00:19:57.251 "trtype": "TCP", 00:19:57.251 "adrfam": "IPv4", 00:19:57.251 "traddr": "10.0.0.2", 00:19:57.251 "trsvcid": "4420" 00:19:57.251 }, 00:19:57.251 "peer_address": { 00:19:57.251 "trtype": "TCP", 00:19:57.251 "adrfam": "IPv4", 00:19:57.251 "traddr": "10.0.0.1", 00:19:57.251 "trsvcid": "33268" 00:19:57.251 }, 00:19:57.251 "auth": { 00:19:57.251 "state": "completed", 00:19:57.251 "digest": "sha256", 00:19:57.251 "dhgroup": "ffdhe6144" 00:19:57.251 } 00:19:57.251 } 00:19:57.251 ]' 00:19:57.251 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.251 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.251 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.511 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.511 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.511 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:57.511 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.511 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.511 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:57.511 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
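Every connect_authenticate pass in this trace has the same shape: pin the host-side initiator to one digest/dhgroup pair (bdev_nvme_set_options), register the host NQN on the subsystem with a DH-CHAP key (nvmf_subsystem_add_host), attach a controller from the host (the in-band AUTHENTICATE handshake runs during this connect), read back the qpair's negotiated auth parameters, and tear down. A condensed bash sketch of one iteration, assuming DH-CHAP keys named key0/ckey0 were registered earlier in the run and the target listens on 10.0.0.2:4420 as it does here:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: accept exactly one digest/dhgroup combination for this pass.
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # Target side: admit the host NQN, bound to key0 (ckey0 makes the auth bidirectional).
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach from the host; DH-HMAC-CHAP runs as part of this connect.
    $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Verify the qpair completed auth with the requested parameters.
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'
    # Clean up before the next digest/dhgroup/key combination.
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn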
00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.450 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.019 00:19:59.019 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.019 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.019 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.019 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.019 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.019 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.019 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.019 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.019 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.019 { 00:19:59.019 "cntlid": 41, 00:19:59.019 "qid": 0, 00:19:59.019 "state": "enabled", 00:19:59.019 "thread": "nvmf_tgt_poll_group_000", 00:19:59.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:59.019 "listen_address": { 00:19:59.019 "trtype": "TCP", 00:19:59.019 "adrfam": "IPv4", 00:19:59.019 "traddr": "10.0.0.2", 00:19:59.019 "trsvcid": "4420" 00:19:59.019 }, 00:19:59.019 "peer_address": { 00:19:59.019 "trtype": "TCP", 00:19:59.019 "adrfam": "IPv4", 00:19:59.019 "traddr": "10.0.0.1", 00:19:59.019 "trsvcid": "33280" 00:19:59.019 }, 00:19:59.019 "auth": { 00:19:59.019 "state": "completed", 00:19:59.019 "digest": "sha256", 00:19:59.019 "dhgroup": "ffdhe8192" 00:19:59.019 } 00:19:59.019 } 00:19:59.019 ]' 00:19:59.279 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.279 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.279 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.279 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.279 05:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.279 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.279 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.279 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.537 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:19:59.537 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:00.105 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.105 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:00.105 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.105 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.105 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.105 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.105 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.105 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.364 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.933 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.933 { 00:20:00.933 "cntlid": 43, 00:20:00.933 "qid": 0, 00:20:00.933 "state": "enabled", 00:20:00.933 "thread": "nvmf_tgt_poll_group_000", 00:20:00.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:00.933 "listen_address": { 00:20:00.933 "trtype": "TCP", 00:20:00.933 "adrfam": "IPv4", 00:20:00.933 "traddr": "10.0.0.2", 00:20:00.933 "trsvcid": "4420" 00:20:00.933 }, 00:20:00.933 "peer_address": { 00:20:00.933 "trtype": "TCP", 00:20:00.933 "adrfam": "IPv4", 00:20:00.933 "traddr": "10.0.0.1", 00:20:00.933 "trsvcid": "58314" 00:20:00.933 }, 00:20:00.933 "auth": { 00:20:00.933 "state": "completed", 00:20:00.933 "digest": "sha256", 00:20:00.933 "dhgroup": "ffdhe8192" 00:20:00.933 } 00:20:00.933 } 00:20:00.933 ]' 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:00.933 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.193 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.193 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.193 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.193 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.193 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.193 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:01.193 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:02.130 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.130 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:02.130 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.130 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.130 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.131 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.131 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.131 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:02.131 05:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.131 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.700 00:20:02.700 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.700 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.700 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.700 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.700 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.700 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.700 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.700 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.700 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.700 { 00:20:02.700 "cntlid": 45, 00:20:02.700 "qid": 0, 00:20:02.700 "state": "enabled", 00:20:02.700 "thread": "nvmf_tgt_poll_group_000", 00:20:02.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:02.700 "listen_address": { 00:20:02.700 "trtype": "TCP", 00:20:02.700 "adrfam": "IPv4", 00:20:02.700 "traddr": "10.0.0.2", 00:20:02.700 "trsvcid": "4420" 00:20:02.700 }, 00:20:02.700 "peer_address": { 00:20:02.700 "trtype": "TCP", 00:20:02.700 "adrfam": "IPv4", 00:20:02.700 "traddr": "10.0.0.1", 00:20:02.700 "trsvcid": "58346" 00:20:02.700 }, 00:20:02.700 "auth": { 00:20:02.700 "state": "completed", 00:20:02.700 "digest": "sha256", 00:20:02.700 "dhgroup": "ffdhe8192" 00:20:02.700 } 00:20:02.700 } 00:20:02.700 ]' 00:20:02.700 
05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.966 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.966 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.966 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.966 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.966 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.966 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.966 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.262 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:03.262 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.880 05:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.880 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.164 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.164 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.164 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.164 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.424 00:20:04.424 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.424 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.424 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.684 { 00:20:04.684 "cntlid": 47, 00:20:04.684 "qid": 0, 00:20:04.684 "state": "enabled", 00:20:04.684 "thread": "nvmf_tgt_poll_group_000", 00:20:04.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:04.684 "listen_address": { 00:20:04.684 "trtype": "TCP", 00:20:04.684 "adrfam": "IPv4", 00:20:04.684 "traddr": "10.0.0.2", 00:20:04.684 "trsvcid": "4420" 00:20:04.684 }, 00:20:04.684 "peer_address": { 00:20:04.684 "trtype": "TCP", 00:20:04.684 "adrfam": "IPv4", 00:20:04.684 "traddr": "10.0.0.1", 00:20:04.684 "trsvcid": "58376" 00:20:04.684 }, 00:20:04.684 "auth": { 00:20:04.684 "state": "completed", 00:20:04.684 
"digest": "sha256", 00:20:04.684 "dhgroup": "ffdhe8192" 00:20:04.684 } 00:20:04.684 } 00:20:04.684 ]' 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.684 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.944 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:04.944 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:05.513 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:05.772 05:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.772 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.033 00:20:06.033 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.033 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.033 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.293 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.293 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.293 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.293 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.293 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.293 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.293 { 00:20:06.293 "cntlid": 49, 00:20:06.294 "qid": 0, 00:20:06.294 "state": "enabled", 00:20:06.294 "thread": "nvmf_tgt_poll_group_000", 00:20:06.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:06.294 "listen_address": { 00:20:06.294 "trtype": "TCP", 00:20:06.294 "adrfam": "IPv4", 
00:20:06.294 "traddr": "10.0.0.2", 00:20:06.294 "trsvcid": "4420" 00:20:06.294 }, 00:20:06.294 "peer_address": { 00:20:06.294 "trtype": "TCP", 00:20:06.294 "adrfam": "IPv4", 00:20:06.294 "traddr": "10.0.0.1", 00:20:06.294 "trsvcid": "58396" 00:20:06.294 }, 00:20:06.294 "auth": { 00:20:06.294 "state": "completed", 00:20:06.294 "digest": "sha384", 00:20:06.294 "dhgroup": "null" 00:20:06.294 } 00:20:06.294 } 00:20:06.294 ]' 00:20:06.294 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.294 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.294 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.294 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.294 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.294 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.294 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.294 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.554 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:06.554 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:07.124 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.384 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.644 00:20:07.644 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.644 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.644 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.904 { 00:20:07.904 "cntlid": 51, 00:20:07.904 "qid": 0, 00:20:07.904 "state": "enabled", 
00:20:07.904 "thread": "nvmf_tgt_poll_group_000", 00:20:07.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:07.904 "listen_address": { 00:20:07.904 "trtype": "TCP", 00:20:07.904 "adrfam": "IPv4", 00:20:07.904 "traddr": "10.0.0.2", 00:20:07.904 "trsvcid": "4420" 00:20:07.904 }, 00:20:07.904 "peer_address": { 00:20:07.904 "trtype": "TCP", 00:20:07.904 "adrfam": "IPv4", 00:20:07.904 "traddr": "10.0.0.1", 00:20:07.904 "trsvcid": "58426" 00:20:07.904 }, 00:20:07.904 "auth": { 00:20:07.904 "state": "completed", 00:20:07.904 "digest": "sha384", 00:20:07.904 "dhgroup": "null" 00:20:07.904 } 00:20:07.904 } 00:20:07.904 ]' 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.904 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.163 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:08.163 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:08.732 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.732 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:08.732 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.732 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.992 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.251 00:20:09.251 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.251 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.251 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.511 05:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.511 { 00:20:09.511 "cntlid": 53, 00:20:09.511 "qid": 0, 00:20:09.511 "state": "enabled", 00:20:09.511 "thread": "nvmf_tgt_poll_group_000", 00:20:09.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:09.511 "listen_address": { 00:20:09.511 "trtype": "TCP", 00:20:09.511 "adrfam": "IPv4", 00:20:09.511 "traddr": "10.0.0.2", 00:20:09.511 "trsvcid": "4420" 00:20:09.511 }, 00:20:09.511 "peer_address": { 00:20:09.511 "trtype": "TCP", 00:20:09.511 "adrfam": "IPv4", 00:20:09.511 "traddr": "10.0.0.1", 00:20:09.511 "trsvcid": "39532" 00:20:09.511 }, 00:20:09.511 "auth": { 00:20:09.511 "state": "completed", 00:20:09.511 "digest": "sha384", 00:20:09.511 "dhgroup": "null" 00:20:09.511 } 00:20:09.511 } 00:20:09.511 ]' 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.511 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:09.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:10.339 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.598 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.858 00:20:10.858 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.858 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.858 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.117 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.117 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.117 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.117 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.117 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.117 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.117 { 00:20:11.117 "cntlid": 55, 00:20:11.117 "qid": 0, 00:20:11.117 "state": "enabled", 00:20:11.117 "thread": "nvmf_tgt_poll_group_000", 00:20:11.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:11.117 "listen_address": { 00:20:11.117 "trtype": "TCP", 00:20:11.117 "adrfam": "IPv4", 00:20:11.117 "traddr": "10.0.0.2", 00:20:11.117 "trsvcid": "4420" 00:20:11.117 }, 00:20:11.117 "peer_address": { 00:20:11.117 "trtype": "TCP", 00:20:11.117 "adrfam": "IPv4", 00:20:11.117 "traddr": "10.0.0.1", 00:20:11.117 "trsvcid": "39558" 00:20:11.117 }, 00:20:11.117 "auth": { 00:20:11.117 "state": "completed", 00:20:11.117 "digest": "sha384", 00:20:11.117 "dhgroup": "null" 00:20:11.117 } 00:20:11.117 } 00:20:11.117 ]' 00:20:11.117 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.117 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.117 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.117 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:11.117 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.117 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.117 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.117 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.377 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:11.378 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:11.948 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.949 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:11.949 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.949 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.949 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.949 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.949 05:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.949 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.949 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.209 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.469 00:20:12.469 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.469 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.469 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.729 { 00:20:12.729 "cntlid": 57, 00:20:12.729 "qid": 0, 00:20:12.729 "state": "enabled", 00:20:12.729 "thread": "nvmf_tgt_poll_group_000", 00:20:12.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:12.729 "listen_address": { 00:20:12.729 "trtype": "TCP", 00:20:12.729 "adrfam": "IPv4", 00:20:12.729 "traddr": "10.0.0.2", 00:20:12.729 "trsvcid": "4420" 00:20:12.729 }, 00:20:12.729 "peer_address": { 00:20:12.729 "trtype": "TCP", 00:20:12.729 "adrfam": "IPv4", 00:20:12.729 "traddr": "10.0.0.1", 00:20:12.729 "trsvcid": "39596" 00:20:12.729 }, 00:20:12.729 "auth": { 00:20:12.729 "state": "completed", 00:20:12.729 "digest": "sha384", 00:20:12.729 "dhgroup": "ffdhe2048" 00:20:12.729 } 00:20:12.729 } 00:20:12.729 ]' 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.729 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.990 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:12.990 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:13.562 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.562 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:13.562 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.562 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.562 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.562 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.562 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.562 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.822 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.083 00:20:14.083 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.083 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.083 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.344 { 00:20:14.344 "cntlid": 59, 00:20:14.344 "qid": 0, 00:20:14.344 "state": "enabled", 00:20:14.344 "thread": "nvmf_tgt_poll_group_000", 00:20:14.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:14.344 "listen_address": { 00:20:14.344 "trtype": "TCP", 00:20:14.344 "adrfam": "IPv4", 00:20:14.344 "traddr": "10.0.0.2", 00:20:14.344 "trsvcid": "4420" 00:20:14.344 }, 00:20:14.344 "peer_address": { 00:20:14.344 "trtype": "TCP", 00:20:14.344 "adrfam": "IPv4", 00:20:14.344 "traddr": "10.0.0.1", 00:20:14.344 "trsvcid": "39618" 00:20:14.344 }, 00:20:14.344 "auth": { 00:20:14.344 "state": "completed", 00:20:14.344 "digest": "sha384", 00:20:14.344 "dhgroup": "ffdhe2048" 00:20:14.344 } 00:20:14.344 } 00:20:14.344 ]' 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.344 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.605 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:14.605 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:15.176 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.176 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:15.176 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.176 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.176 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.176 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.176 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.176 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.436 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.437 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.437 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.697 00:20:15.697 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.697 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.697 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.957 { 00:20:15.957 "cntlid": 61, 00:20:15.957 "qid": 0, 00:20:15.957 "state": "enabled", 00:20:15.957 "thread": "nvmf_tgt_poll_group_000", 00:20:15.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:15.957 "listen_address": { 00:20:15.957 "trtype": "TCP", 00:20:15.957 "adrfam": "IPv4", 00:20:15.957 "traddr": "10.0.0.2", 00:20:15.957 "trsvcid": "4420" 00:20:15.957 }, 00:20:15.957 "peer_address": { 00:20:15.957 "trtype": "TCP", 00:20:15.957 "adrfam": "IPv4", 00:20:15.957 "traddr": "10.0.0.1", 00:20:15.957 "trsvcid": "39650" 00:20:15.957 }, 00:20:15.957 "auth": { 00:20:15.957 "state": "completed", 00:20:15.957 "digest": "sha384", 00:20:15.957 "dhgroup": "ffdhe2048" 00:20:15.957 } 00:20:15.957 } 00:20:15.957 ]' 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.957 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.217 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:16.217 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:16.788 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.788 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:16.788 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.788 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.788 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.788 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.788 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.788 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.057 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:17.057 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.057 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.057 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.058 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.058 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.058 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:17.058 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.058 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.058 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.058 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:17.058 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.058 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.317 00:20:17.318 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.318 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.318 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.577 { 00:20:17.577 "cntlid": 63, 00:20:17.577 "qid": 0, 00:20:17.577 "state": "enabled", 00:20:17.577 "thread": "nvmf_tgt_poll_group_000", 00:20:17.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:17.577 "listen_address": { 00:20:17.577 "trtype": "TCP", 00:20:17.577 "adrfam": "IPv4", 00:20:17.577 "traddr": "10.0.0.2", 00:20:17.577 "trsvcid": "4420" 00:20:17.577 }, 00:20:17.577 "peer_address": { 00:20:17.577 "trtype": "TCP", 00:20:17.577 "adrfam": "IPv4", 00:20:17.577 "traddr": "10.0.0.1", 00:20:17.577 "trsvcid": "39676" 00:20:17.577 }, 00:20:17.577 "auth": { 00:20:17.577 "state": "completed", 00:20:17.577 "digest": "sha384", 00:20:17.577 "dhgroup": "ffdhe2048" 00:20:17.577 } 00:20:17.577 } 00:20:17.577 ]' 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.577 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.838 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:17.838 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:18.408 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:18.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.408 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:18.408 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.408 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.408 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.408 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.408 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.408 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.408 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.669 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.929 
00:20:18.929 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.929 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.929 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.189 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.189 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.189 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.189 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.189 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.189 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.189 { 00:20:19.189 "cntlid": 65, 00:20:19.189 "qid": 0, 00:20:19.189 "state": "enabled", 00:20:19.189 "thread": "nvmf_tgt_poll_group_000", 00:20:19.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:19.189 "listen_address": { 00:20:19.189 "trtype": "TCP", 00:20:19.189 "adrfam": "IPv4", 00:20:19.189 "traddr": "10.0.0.2", 00:20:19.189 "trsvcid": "4420" 00:20:19.189 }, 00:20:19.189 "peer_address": { 00:20:19.189 "trtype": "TCP", 00:20:19.189 "adrfam": "IPv4", 00:20:19.189 "traddr": "10.0.0.1", 00:20:19.189 "trsvcid": "39700" 00:20:19.189 }, 00:20:19.189 "auth": { 00:20:19.189 "state": "completed", 00:20:19.189 "digest": "sha384", 00:20:19.189 "dhgroup": "ffdhe3072" 00:20:19.189 } 00:20:19.189 } 00:20:19.189 ]' 00:20:19.189 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.189 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.189 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.189 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.189 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.189 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.189 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.189 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.449 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:19.449 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:20.019 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.019 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:20.019 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.019 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.019 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.019 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.019 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.019 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.278 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.538 00:20:20.538 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.538 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.538 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.798 { 00:20:20.798 "cntlid": 67, 00:20:20.798 "qid": 0, 00:20:20.798 "state": "enabled", 00:20:20.798 "thread": "nvmf_tgt_poll_group_000", 00:20:20.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:20.798 "listen_address": { 00:20:20.798 "trtype": "TCP", 00:20:20.798 "adrfam": "IPv4", 00:20:20.798 "traddr": "10.0.0.2", 00:20:20.798 "trsvcid": "4420" 00:20:20.798 }, 00:20:20.798 "peer_address": { 00:20:20.798 "trtype": "TCP", 00:20:20.798 "adrfam": "IPv4", 00:20:20.798 "traddr": "10.0.0.1", 00:20:20.798 "trsvcid": "33724" 00:20:20.798 }, 00:20:20.798 "auth": { 00:20:20.798 "state": "completed", 00:20:20.798 "digest": "sha384", 00:20:20.798 "dhgroup": "ffdhe3072" 00:20:20.798 } 00:20:20.798 } 00:20:20.798 ]' 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.798 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.059 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret 
DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:21.059 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:21.639 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.639 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:21.639 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.639 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.639 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.639 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.639 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.639 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.899 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.176 00:20:22.176 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.176 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.176 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.176 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.176 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.176 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.176 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.176 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.176 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.176 { 00:20:22.176 "cntlid": 69, 00:20:22.176 "qid": 0, 00:20:22.176 "state": "enabled", 00:20:22.176 "thread": "nvmf_tgt_poll_group_000", 00:20:22.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:22.176 "listen_address": { 00:20:22.176 "trtype": "TCP", 00:20:22.176 "adrfam": "IPv4", 00:20:22.176 "traddr": "10.0.0.2", 00:20:22.176 "trsvcid": "4420" 00:20:22.176 }, 00:20:22.176 "peer_address": { 00:20:22.176 "trtype": "TCP", 00:20:22.176 "adrfam": "IPv4", 00:20:22.176 "traddr": "10.0.0.1", 00:20:22.176 "trsvcid": "33750" 00:20:22.176 }, 00:20:22.176 "auth": { 00:20:22.176 "state": "completed", 00:20:22.176 "digest": "sha384", 00:20:22.176 "dhgroup": "ffdhe3072" 00:20:22.176 } 00:20:22.176 } 00:20:22.176 ]' 00:20:22.436 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.436 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.436 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.436 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.436 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.436 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.436 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.436 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:22.696 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:22.696 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:23.264 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.264 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:23.264 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.264 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.264 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.264 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.264 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.264 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
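Every attach in this log is followed by the same sequence of probes: the controller list is read back over the host RPC socket, and the target's qpair dump is checked for the negotiated digest, DH group, and a completed auth state. Pulled out of the trace, the check amounts to the following (the expected group here matches the ffdhe3072 passes; later passes substitute ffdhe4096 and ffdhe6144):

    # Verification step mirrored from the jq probes in this trace: confirm the
    # controller attached and that DH-HMAC-CHAP completed with the expected
    # parameters on the target side.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    expected_dhgroup=ffdhe3072  # whichever group the current pass configured

    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$expected_dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth.state of "completed" on the target's qpair is what distinguishes a genuinely authenticated connection from one that merely connected; the cntlid values stepping through the qpair dumps (65, 67, 69, ...) are simply fresh controller IDs handed out on each pass.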
00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.525 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.785 00:20:23.785 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.785 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.785 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.785 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.786 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.786 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.786 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.786 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.786 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.786 { 00:20:23.786 "cntlid": 71, 00:20:23.786 "qid": 0, 00:20:23.786 "state": "enabled", 00:20:23.786 "thread": "nvmf_tgt_poll_group_000", 00:20:23.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:23.786 "listen_address": { 00:20:23.786 "trtype": "TCP", 00:20:23.786 "adrfam": "IPv4", 00:20:23.786 "traddr": "10.0.0.2", 00:20:23.786 "trsvcid": "4420" 00:20:23.786 }, 00:20:23.786 "peer_address": { 00:20:23.786 "trtype": "TCP", 00:20:23.786 "adrfam": "IPv4", 00:20:23.786 "traddr": "10.0.0.1", 00:20:23.786 "trsvcid": "33780" 00:20:23.786 }, 00:20:23.786 "auth": { 00:20:23.786 "state": "completed", 00:20:23.786 "digest": "sha384", 00:20:23.786 "dhgroup": "ffdhe3072" 00:20:23.786 } 00:20:23.786 } 00:20:23.786 ]' 00:20:23.786 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.045 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.045 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.045 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.045 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.045 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.045 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.045 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.304 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:24.305 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:24.873 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.873 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:24.873 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.873 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.873 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.873 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.873 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.873 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.873 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
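Interleaved with the RPC-driven attaches, each pass also exercises the in-kernel initiator: nvme-cli connects with the host and controller secrets passed explicitly, then disconnects. The secrets travel in the DHHC-1 container format, where the field after "DHHC-1:" (00 through 03 in this run) records the hash used to transform the base64 payload, 00 meaning it is used as-is. With the actual keys elided (angle-bracket placeholders stand in for the DHHC-1 strings generated earlier in auth.sh), the command pair from the log is:

    # Kernel-initiator leg of the same pass, copied from the trace with the
    # secrets elided; <host-secret>/<ctrl-secret> are placeholders.
    nvme connect -t tcp -a 10.0.0.2 -i 1 -l 0 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --dhchap-secret "DHHC-1:00:<host-secret>:" \
        --dhchap-ctrl-secret "DHHC-1:03:<ctrl-secret>:"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The key3 passes, like the one just above, omit --dhchap-ctrl-secret entirely, so they cover the unidirectional case in which the target authenticates the host but not the reverse.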
00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.132 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.392 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.392 { 00:20:25.392 "cntlid": 73, 00:20:25.392 "qid": 0, 00:20:25.392 "state": "enabled", 00:20:25.392 "thread": "nvmf_tgt_poll_group_000", 00:20:25.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:25.392 "listen_address": { 00:20:25.392 "trtype": "TCP", 00:20:25.392 "adrfam": "IPv4", 00:20:25.392 "traddr": "10.0.0.2", 00:20:25.392 "trsvcid": "4420" 00:20:25.392 }, 00:20:25.392 "peer_address": { 00:20:25.392 "trtype": "TCP", 00:20:25.392 "adrfam": "IPv4", 00:20:25.392 "traddr": "10.0.0.1", 00:20:25.392 "trsvcid": "33816" 00:20:25.392 }, 00:20:25.392 "auth": { 00:20:25.392 "state": "completed", 00:20:25.392 "digest": "sha384", 00:20:25.392 "dhgroup": "ffdhe4096" 00:20:25.392 } 00:20:25.392 } 00:20:25.392 ]' 00:20:25.392 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.651 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.651 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.651 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.651 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.651 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.651 
05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.651 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.911 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:25.911 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:26.478 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.478 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:26.478 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.478 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.478 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.478 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.478 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.478 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.738 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.999 00:20:26.999 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.999 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.999 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.999 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.999 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.999 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.999 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.260 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.260 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.260 { 00:20:27.260 "cntlid": 75, 00:20:27.260 "qid": 0, 00:20:27.260 "state": "enabled", 00:20:27.260 "thread": "nvmf_tgt_poll_group_000", 00:20:27.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:27.260 "listen_address": { 00:20:27.260 "trtype": "TCP", 00:20:27.260 "adrfam": "IPv4", 00:20:27.260 "traddr": "10.0.0.2", 00:20:27.260 "trsvcid": "4420" 00:20:27.260 }, 00:20:27.260 "peer_address": { 00:20:27.260 "trtype": "TCP", 00:20:27.260 "adrfam": "IPv4", 00:20:27.260 "traddr": "10.0.0.1", 00:20:27.260 "trsvcid": "33828" 00:20:27.260 }, 00:20:27.260 "auth": { 00:20:27.260 "state": "completed", 00:20:27.260 "digest": "sha384", 00:20:27.260 "dhgroup": "ffdhe4096" 00:20:27.260 } 00:20:27.260 } 00:20:27.260 ]' 00:20:27.260 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.260 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.260 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.260 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:27.260 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.260 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.260 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.260 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.520 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:27.520 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:28.091 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.091 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:28.091 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.091 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.091 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.091 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.091 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.091 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.351 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.611 00:20:28.611 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.611 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.611 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.872 { 00:20:28.872 "cntlid": 77, 00:20:28.872 "qid": 0, 00:20:28.872 "state": "enabled", 00:20:28.872 "thread": "nvmf_tgt_poll_group_000", 00:20:28.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:28.872 "listen_address": { 00:20:28.872 "trtype": "TCP", 00:20:28.872 "adrfam": "IPv4", 00:20:28.872 "traddr": "10.0.0.2", 00:20:28.872 "trsvcid": "4420" 00:20:28.872 }, 00:20:28.872 "peer_address": { 00:20:28.872 "trtype": "TCP", 00:20:28.872 "adrfam": "IPv4", 00:20:28.872 "traddr": "10.0.0.1", 00:20:28.872 "trsvcid": "33850" 00:20:28.872 }, 00:20:28.872 "auth": { 00:20:28.872 "state": "completed", 00:20:28.872 "digest": "sha384", 00:20:28.872 "dhgroup": "ffdhe4096" 00:20:28.872 } 00:20:28.872 } 00:20:28.872 ]' 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.872 05:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.872 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.134 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:29.134 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:29.726 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.726 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:29.726 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.726 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.726 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.726 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.726 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.726 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.986 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.246 00:20:30.246 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.246 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.246 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.507 { 00:20:30.507 "cntlid": 79, 00:20:30.507 "qid": 0, 00:20:30.507 "state": "enabled", 00:20:30.507 "thread": "nvmf_tgt_poll_group_000", 00:20:30.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:30.507 "listen_address": { 00:20:30.507 "trtype": "TCP", 00:20:30.507 "adrfam": "IPv4", 00:20:30.507 "traddr": "10.0.0.2", 00:20:30.507 "trsvcid": "4420" 00:20:30.507 }, 00:20:30.507 "peer_address": { 00:20:30.507 "trtype": "TCP", 00:20:30.507 "adrfam": "IPv4", 00:20:30.507 "traddr": "10.0.0.1", 00:20:30.507 "trsvcid": "37802" 00:20:30.507 }, 00:20:30.507 "auth": { 00:20:30.507 "state": "completed", 00:20:30.507 "digest": "sha384", 00:20:30.507 "dhgroup": "ffdhe4096" 00:20:30.507 } 00:20:30.507 } 00:20:30.507 ]' 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.507 05:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.507 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.767 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:30.767 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:31.337 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.337 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:31.337 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.337 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.337 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.337 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.337 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.337 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.337 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.596 05:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.596 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.857 00:20:31.857 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.857 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.857 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.117 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.117 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.117 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.117 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.117 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.117 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.117 { 00:20:32.117 "cntlid": 81, 00:20:32.117 "qid": 0, 00:20:32.117 "state": "enabled", 00:20:32.117 "thread": "nvmf_tgt_poll_group_000", 00:20:32.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:32.117 "listen_address": { 00:20:32.117 "trtype": "TCP", 00:20:32.117 "adrfam": "IPv4", 00:20:32.117 "traddr": "10.0.0.2", 00:20:32.117 "trsvcid": "4420" 00:20:32.117 }, 00:20:32.117 "peer_address": { 00:20:32.117 "trtype": "TCP", 00:20:32.117 "adrfam": "IPv4", 00:20:32.117 "traddr": "10.0.0.1", 00:20:32.117 "trsvcid": "37832" 00:20:32.117 }, 00:20:32.117 "auth": { 00:20:32.117 "state": "completed", 00:20:32.117 "digest": 
"sha384", 00:20:32.117 "dhgroup": "ffdhe6144" 00:20:32.117 } 00:20:32.117 } 00:20:32.117 ]' 00:20:32.117 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.117 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.117 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.117 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.117 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.377 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.377 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.377 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.377 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:32.377 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:33.316 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.316 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:33.316 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.316 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.316 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.316 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.316 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.317 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.317 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.583 00:20:33.583 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.583 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.583 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.842 { 00:20:33.842 "cntlid": 83, 00:20:33.842 "qid": 0, 00:20:33.842 "state": "enabled", 00:20:33.842 "thread": "nvmf_tgt_poll_group_000", 00:20:33.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:33.842 "listen_address": { 00:20:33.842 "trtype": "TCP", 00:20:33.842 "adrfam": "IPv4", 00:20:33.842 "traddr": "10.0.0.2", 00:20:33.842 
"trsvcid": "4420" 00:20:33.842 }, 00:20:33.842 "peer_address": { 00:20:33.842 "trtype": "TCP", 00:20:33.842 "adrfam": "IPv4", 00:20:33.842 "traddr": "10.0.0.1", 00:20:33.842 "trsvcid": "37856" 00:20:33.842 }, 00:20:33.842 "auth": { 00:20:33.842 "state": "completed", 00:20:33.842 "digest": "sha384", 00:20:33.842 "dhgroup": "ffdhe6144" 00:20:33.842 } 00:20:33.842 } 00:20:33.842 ]' 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.842 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.101 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:34.101 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:34.677 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.677 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:34.677 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.677 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.936 
05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.936 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.196 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.456 { 00:20:35.456 "cntlid": 85, 00:20:35.456 "qid": 0, 00:20:35.456 "state": "enabled", 00:20:35.456 "thread": "nvmf_tgt_poll_group_000", 00:20:35.456 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:35.456 "listen_address": { 00:20:35.456 "trtype": "TCP", 00:20:35.456 "adrfam": "IPv4", 00:20:35.456 "traddr": "10.0.0.2", 00:20:35.456 "trsvcid": "4420" 00:20:35.456 }, 00:20:35.456 "peer_address": { 00:20:35.456 "trtype": "TCP", 00:20:35.456 "adrfam": "IPv4", 00:20:35.456 "traddr": "10.0.0.1", 00:20:35.456 "trsvcid": "37886" 00:20:35.456 }, 00:20:35.456 "auth": { 00:20:35.456 "state": "completed", 00:20:35.456 "digest": "sha384", 00:20:35.456 "dhgroup": "ffdhe6144" 00:20:35.456 } 00:20:35.456 } 00:20:35.456 ]' 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.456 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.717 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:35.717 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.717 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.717 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.717 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.717 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:35.717 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.658 05:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.658 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.918 00:20:37.179 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.179 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.179 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.179 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.179 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.179 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.179 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.179 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.179 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.179 { 00:20:37.179 "cntlid": 87, 
00:20:37.179 "qid": 0, 00:20:37.179 "state": "enabled", 00:20:37.179 "thread": "nvmf_tgt_poll_group_000", 00:20:37.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:37.179 "listen_address": { 00:20:37.179 "trtype": "TCP", 00:20:37.179 "adrfam": "IPv4", 00:20:37.179 "traddr": "10.0.0.2", 00:20:37.179 "trsvcid": "4420" 00:20:37.179 }, 00:20:37.179 "peer_address": { 00:20:37.179 "trtype": "TCP", 00:20:37.179 "adrfam": "IPv4", 00:20:37.179 "traddr": "10.0.0.1", 00:20:37.179 "trsvcid": "37900" 00:20:37.179 }, 00:20:37.179 "auth": { 00:20:37.179 "state": "completed", 00:20:37.179 "digest": "sha384", 00:20:37.179 "dhgroup": "ffdhe6144" 00:20:37.179 } 00:20:37.179 } 00:20:37.179 ]' 00:20:37.179 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.179 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.179 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.439 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.439 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.439 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.439 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.439 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.439 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:37.439 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.380 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.952 00:20:38.952 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.952 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.952 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.952 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.952 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.952 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.952 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.212 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.212 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.212 { 00:20:39.212 "cntlid": 89, 00:20:39.212 "qid": 0, 00:20:39.212 "state": "enabled", 00:20:39.212 "thread": "nvmf_tgt_poll_group_000", 00:20:39.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:39.212 "listen_address": { 00:20:39.212 "trtype": "TCP", 00:20:39.212 "adrfam": "IPv4", 00:20:39.212 "traddr": "10.0.0.2", 00:20:39.212 "trsvcid": "4420" 00:20:39.212 }, 00:20:39.212 "peer_address": { 00:20:39.212 "trtype": "TCP", 00:20:39.212 "adrfam": "IPv4", 00:20:39.212 "traddr": "10.0.0.1", 00:20:39.212 "trsvcid": "37934" 00:20:39.212 }, 00:20:39.212 "auth": { 00:20:39.212 "state": "completed", 00:20:39.212 "digest": "sha384", 00:20:39.212 "dhgroup": "ffdhe8192" 00:20:39.212 } 00:20:39.212 } 00:20:39.212 ]' 00:20:39.212 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.212 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.212 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.212 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.212 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.212 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.212 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.212 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.472 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:39.472 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:40.042 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.042 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:40.042 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.042 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.042 05:13:53 
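
After each bdev-layer pass, auth.sh@80-83 drives the in-kernel initiator through the same credentials via nvme-cli, as in the connect command that closes the entry above. The DHHC-1:NN: prefix is the standard NVMe DH-HMAC-CHAP secret representation; in this trace 00 marks an untransformed host secret and 03 a SHA-512-transformed controller secret (that NN-to-hash mapping is stated here as background, not something this log itself verifies). A sketch with the secrets shortened; the full strings appear verbatim in the trace:

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6

  SECRET='DHHC-1:00:ZjQ1NDA2...GFkig==:'        # host secret, untransformed (00); truncated here
  CTRL_SECRET='DHHC-1:03:Yjg0YTQ5...YW4tr9I=:'  # controller secret for bidirectional auth; truncated here

  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
       -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
       --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

  nvme disconnect -n "$SUBNQN"    # expected: "... disconnected 1 controller(s)"
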
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.042 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.042 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.042 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.302 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:40.302 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.302 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.302 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:40.302 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.303 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.303 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.303 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.303 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.303 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.303 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.303 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.303 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.871 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.871 { 00:20:40.871 "cntlid": 91, 00:20:40.871 "qid": 0, 00:20:40.871 "state": "enabled", 00:20:40.871 "thread": "nvmf_tgt_poll_group_000", 00:20:40.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:40.871 "listen_address": { 00:20:40.871 "trtype": "TCP", 00:20:40.871 "adrfam": "IPv4", 00:20:40.871 "traddr": "10.0.0.2", 00:20:40.871 "trsvcid": "4420" 00:20:40.871 }, 00:20:40.871 "peer_address": { 00:20:40.871 "trtype": "TCP", 00:20:40.871 "adrfam": "IPv4", 00:20:40.871 "traddr": "10.0.0.1", 00:20:40.871 "trsvcid": "35358" 00:20:40.871 }, 00:20:40.871 "auth": { 00:20:40.871 "state": "completed", 00:20:40.871 "digest": "sha384", 00:20:40.871 "dhgroup": "ffdhe8192" 00:20:40.871 } 00:20:40.871 } 00:20:40.871 ]' 00:20:40.871 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.131 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.131 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.131 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.131 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.131 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.131 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.131 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.411 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:41.411 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:41.980 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.980 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:41.980 05:13:55 
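
The nvmf_subsystem_remove_host above completes the per-iteration teardown, which always runs in the same order so the next pass starts from a clean slate:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

  # 1. Host: drop the bdev controller created by bdev_nvme_attach_controller.
  "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # 2. Kernel initiator: tear down the nvme-cli connection.
  nvme disconnect -n "$SUBNQN"

  # 3. Target: revoke the host entry so stale keys cannot satisfy the next pass.
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
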
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.980 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.980 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.980 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.980 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.980 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.240 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:42.240 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.240 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.240 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.240 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.240 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.240 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.240 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.240 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.240 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.240 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.240 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.240 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.500 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.760 05:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.760 { 00:20:42.760 "cntlid": 93, 00:20:42.760 "qid": 0, 00:20:42.760 "state": "enabled", 00:20:42.760 "thread": "nvmf_tgt_poll_group_000", 00:20:42.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:42.760 "listen_address": { 00:20:42.760 "trtype": "TCP", 00:20:42.760 "adrfam": "IPv4", 00:20:42.760 "traddr": "10.0.0.2", 00:20:42.760 "trsvcid": "4420" 00:20:42.760 }, 00:20:42.760 "peer_address": { 00:20:42.760 "trtype": "TCP", 00:20:42.760 "adrfam": "IPv4", 00:20:42.760 "traddr": "10.0.0.1", 00:20:42.760 "trsvcid": "35394" 00:20:42.760 }, 00:20:42.760 "auth": { 00:20:42.760 "state": "completed", 00:20:42.760 "digest": "sha384", 00:20:42.760 "dhgroup": "ffdhe8192" 00:20:42.760 } 00:20:42.760 } 00:20:42.760 ]' 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.019 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.019 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.019 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.019 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.019 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.279 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:43.279 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:43.849 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.849 05:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:43.849 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.849 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.849 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.849 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.849 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.849 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.109 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.369 00:20:44.369 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.369 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.369 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.629 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.629 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.629 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.629 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.629 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.629 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.629 { 00:20:44.629 "cntlid": 95, 00:20:44.629 "qid": 0, 00:20:44.629 "state": "enabled", 00:20:44.629 "thread": "nvmf_tgt_poll_group_000", 00:20:44.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:44.629 "listen_address": { 00:20:44.629 "trtype": "TCP", 00:20:44.629 "adrfam": "IPv4", 00:20:44.629 "traddr": "10.0.0.2", 00:20:44.629 "trsvcid": "4420" 00:20:44.629 }, 00:20:44.629 "peer_address": { 00:20:44.629 "trtype": "TCP", 00:20:44.629 "adrfam": "IPv4", 00:20:44.629 "traddr": "10.0.0.1", 00:20:44.629 "trsvcid": "35428" 00:20:44.629 }, 00:20:44.629 "auth": { 00:20:44.629 "state": "completed", 00:20:44.629 "digest": "sha384", 00:20:44.629 "dhgroup": "ffdhe8192" 00:20:44.629 } 00:20:44.629 } 00:20:44.629 ]' 00:20:44.629 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.629 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.629 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.893 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.893 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.893 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.893 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.893 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.893 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:44.893 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.833 05:13:59 
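
The disconnect above ends the last sha384/ffdhe8192 connection; once the final remove_host below completes, the outer loop (auth.sh@118) advances to sha512 and the DH-group loop restarts at null. The whole section is one walk over a digest x dhgroup x key matrix, reconstructed here from the @118-@123 markers in the trace. The full digest and dhgroup lists are an assumption: this excerpt only confirms sha384, sha512, null, ffdhe4096, ffdhe6144, and ffdhe8192.

  # Assumed lists; hostrpc, connect_authenticate and keys[] are auth.sh's own
  # wrapper, test function and key-name array, visible in the trace.
  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  for digest in "${digests[@]}"; do            # auth.sh@118
    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@119
      for keyid in "${!keys[@]}"; do           # auth.sh@120: key0..key3
        hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # @121
        connect_authenticate "$digest" "$dhgroup" "$keyid"                # @123
      done
    done
  done
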
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.833 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.094 00:20:46.094 
05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.094 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.094 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.355 { 00:20:46.355 "cntlid": 97, 00:20:46.355 "qid": 0, 00:20:46.355 "state": "enabled", 00:20:46.355 "thread": "nvmf_tgt_poll_group_000", 00:20:46.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:46.355 "listen_address": { 00:20:46.355 "trtype": "TCP", 00:20:46.355 "adrfam": "IPv4", 00:20:46.355 "traddr": "10.0.0.2", 00:20:46.355 "trsvcid": "4420" 00:20:46.355 }, 00:20:46.355 "peer_address": { 00:20:46.355 "trtype": "TCP", 00:20:46.355 "adrfam": "IPv4", 00:20:46.355 "traddr": "10.0.0.1", 00:20:46.355 "trsvcid": "35446" 00:20:46.355 }, 00:20:46.355 "auth": { 00:20:46.355 "state": "completed", 00:20:46.355 "digest": "sha512", 00:20:46.355 "dhgroup": "null" 00:20:46.355 } 00:20:46.355 } 00:20:46.355 ]' 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.355 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.616 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:46.616 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:47.186 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.186 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:47.186 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.186 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.186 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.186 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.186 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.186 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.446 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.707 00:20:47.707 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.707 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.707 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.967 { 00:20:47.967 "cntlid": 99, 00:20:47.967 "qid": 0, 00:20:47.967 "state": "enabled", 00:20:47.967 "thread": "nvmf_tgt_poll_group_000", 00:20:47.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:47.967 "listen_address": { 00:20:47.967 "trtype": "TCP", 00:20:47.967 "adrfam": "IPv4", 00:20:47.967 "traddr": "10.0.0.2", 00:20:47.967 "trsvcid": "4420" 00:20:47.967 }, 00:20:47.967 "peer_address": { 00:20:47.967 "trtype": "TCP", 00:20:47.967 "adrfam": "IPv4", 00:20:47.967 "traddr": "10.0.0.1", 00:20:47.967 "trsvcid": "35468" 00:20:47.967 }, 00:20:47.967 "auth": { 00:20:47.967 "state": "completed", 00:20:47.967 "digest": "sha512", 00:20:47.967 "dhgroup": "null" 00:20:47.967 } 00:20:47.967 } 00:20:47.967 ]' 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.967 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.227 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:48.227 05:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:48.796 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.796 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:48.796 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.796 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.796 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.796 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.796 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.796 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
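[Annotation] The xtrace above repeats one pattern per key: configure the host-side bdev driver, authorize the host on the target, then attach a controller so the DH-HMAC-CHAP exchange runs. Below is a condensed sketch of that flow reconstructed from the trace; the rpc.py path, RPC sockets, NQNs, and addresses are the ones printed in this log, but the variable names (RPC, HOSTNQN, SUBNQN) are sketch conveniences, not the verbatim target/auth.sh source.

# One connect_authenticate pass (sha512/null shown), as traced above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side (-s /var/tmp/host.sock): restrict negotiation to the digest/dhgroup under test.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

# Target side (default RPC socket): allow the host with this key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side again: attaching a controller forces the authentication exchange.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0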
00:20:49.055 05:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.314 00:20:49.314 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.314 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.315 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.574 { 00:20:49.574 "cntlid": 101, 00:20:49.574 "qid": 0, 00:20:49.574 "state": "enabled", 00:20:49.574 "thread": "nvmf_tgt_poll_group_000", 00:20:49.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:49.574 "listen_address": { 00:20:49.574 "trtype": "TCP", 00:20:49.574 "adrfam": "IPv4", 00:20:49.574 "traddr": "10.0.0.2", 00:20:49.574 "trsvcid": "4420" 00:20:49.574 }, 00:20:49.574 "peer_address": { 00:20:49.574 "trtype": "TCP", 00:20:49.574 "adrfam": "IPv4", 00:20:49.574 "traddr": "10.0.0.1", 00:20:49.574 "trsvcid": "40790" 00:20:49.574 }, 00:20:49.574 "auth": { 00:20:49.574 "state": "completed", 00:20:49.574 "digest": "sha512", 00:20:49.574 "dhgroup": "null" 00:20:49.574 } 00:20:49.574 } 00:20:49.574 ]' 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.574 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.833 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:49.833 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:50.401 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.401 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:50.401 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.401 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.401 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.401 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.401 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.401 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.660 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.919 00:20:50.919 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.919 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.919 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.178 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.178 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.178 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.178 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.178 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.178 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.178 { 00:20:51.178 "cntlid": 103, 00:20:51.178 "qid": 0, 00:20:51.178 "state": "enabled", 00:20:51.178 "thread": "nvmf_tgt_poll_group_000", 00:20:51.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:51.178 "listen_address": { 00:20:51.178 "trtype": "TCP", 00:20:51.178 "adrfam": "IPv4", 00:20:51.178 "traddr": "10.0.0.2", 00:20:51.178 "trsvcid": "4420" 00:20:51.178 }, 00:20:51.178 "peer_address": { 00:20:51.178 "trtype": "TCP", 00:20:51.178 "adrfam": "IPv4", 00:20:51.178 "traddr": "10.0.0.1", 00:20:51.178 "trsvcid": "40816" 00:20:51.178 }, 00:20:51.178 "auth": { 00:20:51.178 "state": "completed", 00:20:51.178 "digest": "sha512", 00:20:51.178 "dhgroup": "null" 00:20:51.178 } 00:20:51.178 } 00:20:51.178 ]' 00:20:51.178 05:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.178 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.178 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.178 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.178 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.178 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.178 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.178 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.437 05:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:51.437 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:52.004 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.004 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:52.004 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.004 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.004 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.004 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.004 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.004 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:52.004 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
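[Annotation] Note that the key3 passes above call nvmf_subsystem_add_host and the attach with --dhchap-key key3 only, and the matching nvme connect uses a single --dhchap-secret: there is no ckey3, so that pass exercises unidirectional authentication. The mechanism is the ckey=() line visible in the trace, which uses bash's ${var:+word} expansion. A minimal sketch, reusing the rpc_cmd wrapper and the SUBNQN/HOSTNQN names from the previous sketch, with keyid standing in for the function's $3 argument:

# If ckeys[keyid] is unset or empty, the array expands to nothing and no
# --dhchap-ctrlr-key flag is passed at all; otherwise it expands to two words.
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" "${ckey[@]}"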
00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.263 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.523 00:20:52.523 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.523 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.523 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.783 { 00:20:52.783 "cntlid": 105, 00:20:52.783 "qid": 0, 00:20:52.783 "state": "enabled", 00:20:52.783 "thread": "nvmf_tgt_poll_group_000", 00:20:52.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:52.783 "listen_address": { 00:20:52.783 "trtype": "TCP", 00:20:52.783 "adrfam": "IPv4", 00:20:52.783 "traddr": "10.0.0.2", 00:20:52.783 "trsvcid": "4420" 00:20:52.783 }, 00:20:52.783 "peer_address": { 00:20:52.783 "trtype": "TCP", 00:20:52.783 "adrfam": "IPv4", 00:20:52.783 "traddr": "10.0.0.1", 00:20:52.783 "trsvcid": "40850" 00:20:52.783 }, 00:20:52.783 "auth": { 00:20:52.783 "state": "completed", 00:20:52.783 "digest": "sha512", 00:20:52.783 "dhgroup": "ffdhe2048" 00:20:52.783 } 00:20:52.783 } 00:20:52.783 ]' 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.783 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.783 05:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.044 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:53.044 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:53.614 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.614 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:53.614 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.614 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.614 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.614 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.614 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.614 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.874 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.875 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.134 00:20:54.134 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.134 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.134 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.395 { 00:20:54.395 "cntlid": 107, 00:20:54.395 "qid": 0, 00:20:54.395 "state": "enabled", 00:20:54.395 "thread": "nvmf_tgt_poll_group_000", 00:20:54.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:54.395 "listen_address": { 00:20:54.395 "trtype": "TCP", 00:20:54.395 "adrfam": "IPv4", 00:20:54.395 "traddr": "10.0.0.2", 00:20:54.395 "trsvcid": "4420" 00:20:54.395 }, 00:20:54.395 "peer_address": { 00:20:54.395 "trtype": "TCP", 00:20:54.395 "adrfam": "IPv4", 00:20:54.395 "traddr": "10.0.0.1", 00:20:54.395 "trsvcid": "40888" 00:20:54.395 }, 00:20:54.395 "auth": { 00:20:54.395 "state": "completed", 00:20:54.395 "digest": "sha512", 00:20:54.395 "dhgroup": "ffdhe2048" 00:20:54.395 } 00:20:54.395 } 00:20:54.395 ]' 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.395 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.656 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:54.656 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:20:55.228 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.228 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:55.228 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.228 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.228 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.228 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.228 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.228 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
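[Annotation] The qpair dumps above (cntlid 97, 99, ... per pass) are the verification step: after each attach, the test pulls the subsystem's active qpairs and asserts the negotiated auth parameters before detaching. A sketch of that check, assuming the same rpc_cmd wrapper; the RPC calls and jq filters are the ones traced at target/auth.sh@73-78, with the expected values from the ffdhe2048 passes above:

# Dump active qpairs for the subsystem and check the negotiated auth fields.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
# Tear down the host-side controller before the next combination.
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0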
00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.495 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.756 00:20:55.756 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.756 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.756 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.016 { 00:20:56.016 "cntlid": 109, 00:20:56.016 "qid": 0, 00:20:56.016 "state": "enabled", 00:20:56.016 "thread": "nvmf_tgt_poll_group_000", 00:20:56.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:56.016 "listen_address": { 00:20:56.016 "trtype": "TCP", 00:20:56.016 "adrfam": "IPv4", 00:20:56.016 "traddr": "10.0.0.2", 00:20:56.016 "trsvcid": "4420" 00:20:56.016 }, 00:20:56.016 "peer_address": { 00:20:56.016 "trtype": "TCP", 00:20:56.016 "adrfam": "IPv4", 00:20:56.016 "traddr": "10.0.0.1", 00:20:56.016 "trsvcid": "40914" 00:20:56.016 }, 00:20:56.016 "auth": { 00:20:56.016 "state": "completed", 00:20:56.016 "digest": "sha512", 00:20:56.016 "dhgroup": "ffdhe2048" 00:20:56.016 } 00:20:56.016 } 00:20:56.016 ]' 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.016 05:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.016 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.277 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:56.277 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:20:56.849 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.849 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:56.849 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.849 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.849 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.849 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.849 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.849 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.109 05:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.109 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.369 00:20:57.369 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.369 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.369 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.629 { 00:20:57.629 "cntlid": 111, 00:20:57.629 "qid": 0, 00:20:57.629 "state": "enabled", 00:20:57.629 "thread": "nvmf_tgt_poll_group_000", 00:20:57.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:57.629 "listen_address": { 00:20:57.629 "trtype": "TCP", 00:20:57.629 "adrfam": "IPv4", 00:20:57.629 "traddr": "10.0.0.2", 00:20:57.629 "trsvcid": "4420" 00:20:57.629 }, 00:20:57.629 "peer_address": { 00:20:57.629 "trtype": "TCP", 00:20:57.629 "adrfam": "IPv4", 00:20:57.629 "traddr": "10.0.0.1", 00:20:57.629 "trsvcid": "40938" 00:20:57.629 }, 00:20:57.629 "auth": { 00:20:57.629 "state": "completed", 00:20:57.629 "digest": "sha512", 00:20:57.629 "dhgroup": "ffdhe2048" 00:20:57.629 } 00:20:57.629 } 00:20:57.629 ]' 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.629 
05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.629 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.890 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:57.890 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:20:58.459 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.459 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:58.459 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.459 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.459 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.459 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.459 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.459 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:58.459 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.720 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.979 00:20:58.979 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.979 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.979 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.240 { 00:20:59.240 "cntlid": 113, 00:20:59.240 "qid": 0, 00:20:59.240 "state": "enabled", 00:20:59.240 "thread": "nvmf_tgt_poll_group_000", 00:20:59.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:59.240 "listen_address": { 00:20:59.240 "trtype": "TCP", 00:20:59.240 "adrfam": "IPv4", 00:20:59.240 "traddr": "10.0.0.2", 00:20:59.240 "trsvcid": "4420" 00:20:59.240 }, 00:20:59.240 "peer_address": { 00:20:59.240 "trtype": "TCP", 00:20:59.240 "adrfam": "IPv4", 00:20:59.240 "traddr": "10.0.0.1", 00:20:59.240 "trsvcid": "40964" 00:20:59.240 }, 00:20:59.240 "auth": { 00:20:59.240 "state": "completed", 00:20:59.240 "digest": "sha512", 00:20:59.240 "dhgroup": "ffdhe3072" 00:20:59.240 } 00:20:59.240 } 00:20:59.240 ]' 00:20:59.240 05:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.240 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.534 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:20:59.534 05:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:21:00.166 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.166 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:00.166 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.166 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.166 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.166 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.166 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.166 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.426 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.685 00:21:00.685 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.685 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.685 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.685 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.946 { 00:21:00.946 "cntlid": 115, 00:21:00.946 "qid": 0, 00:21:00.946 "state": "enabled", 00:21:00.946 "thread": "nvmf_tgt_poll_group_000", 00:21:00.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:00.946 "listen_address": { 00:21:00.946 "trtype": "TCP", 00:21:00.946 "adrfam": "IPv4", 00:21:00.946 "traddr": "10.0.0.2", 00:21:00.946 "trsvcid": "4420" 00:21:00.946 }, 00:21:00.946 "peer_address": { 00:21:00.946 "trtype": "TCP", 00:21:00.946 "adrfam": "IPv4", 
00:21:00.946 "traddr": "10.0.0.1", 00:21:00.946 "trsvcid": "57546" 00:21:00.946 }, 00:21:00.946 "auth": { 00:21:00.946 "state": "completed", 00:21:00.946 "digest": "sha512", 00:21:00.946 "dhgroup": "ffdhe3072" 00:21:00.946 } 00:21:00.946 } 00:21:00.946 ]' 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.946 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.206 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:21:01.206 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:21:01.775 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.775 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:01.775 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.775 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.775 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.775 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.775 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:01.775 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:02.034 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:02.034 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.034 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.034 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:02.034 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.034 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.034 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.034 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.034 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.035 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.035 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.035 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.035 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.294 00:21:02.294 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.294 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.294 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.294 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.294 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.294 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.294 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.294 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.294 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.294 { 00:21:02.294 "cntlid": 117, 00:21:02.294 "qid": 0, 00:21:02.294 "state": "enabled", 00:21:02.294 "thread": "nvmf_tgt_poll_group_000", 00:21:02.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:02.294 "listen_address": { 00:21:02.294 "trtype": "TCP", 
00:21:02.294 "adrfam": "IPv4", 00:21:02.294 "traddr": "10.0.0.2", 00:21:02.294 "trsvcid": "4420" 00:21:02.294 }, 00:21:02.294 "peer_address": { 00:21:02.294 "trtype": "TCP", 00:21:02.294 "adrfam": "IPv4", 00:21:02.294 "traddr": "10.0.0.1", 00:21:02.294 "trsvcid": "57578" 00:21:02.294 }, 00:21:02.294 "auth": { 00:21:02.294 "state": "completed", 00:21:02.294 "digest": "sha512", 00:21:02.294 "dhgroup": "ffdhe3072" 00:21:02.294 } 00:21:02.294 } 00:21:02.294 ]' 00:21:02.554 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.554 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.554 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.554 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:02.554 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.554 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.554 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.554 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.813 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:21:02.813 05:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:21:03.384 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.384 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:03.384 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.384 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.384 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.384 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.384 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.384 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.647 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.907 00:21:03.907 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.907 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.907 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.907 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.908 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.908 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.908 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.908 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.908 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.908 { 00:21:03.908 "cntlid": 119, 00:21:03.908 "qid": 0, 00:21:03.908 "state": "enabled", 00:21:03.908 "thread": "nvmf_tgt_poll_group_000", 00:21:03.908 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:03.908 "listen_address": { 00:21:03.908 "trtype": "TCP", 00:21:03.908 "adrfam": "IPv4", 00:21:03.908 "traddr": "10.0.0.2", 00:21:03.908 "trsvcid": "4420" 00:21:03.908 }, 00:21:03.908 "peer_address": { 00:21:03.908 "trtype": "TCP", 00:21:03.908 "adrfam": "IPv4", 00:21:03.908 "traddr": "10.0.0.1", 00:21:03.908 "trsvcid": "57608" 00:21:03.908 }, 00:21:03.908 "auth": { 00:21:03.908 "state": "completed", 00:21:03.908 "digest": "sha512", 00:21:03.908 "dhgroup": "ffdhe3072" 00:21:03.908 } 00:21:03.908 } 00:21:03.908 ]' 00:21:03.908 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.167 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.167 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.167 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.167 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.167 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.167 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.167 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.427 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:04.427 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:04.997 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.997 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:04.997 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.997 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.997 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.997 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.997 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.997 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:04.997 05:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.257 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.517 00:21:05.517 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.517 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.517 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.776 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.776 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.776 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.776 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.776 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.776 05:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.776 { 00:21:05.776 "cntlid": 121, 00:21:05.776 "qid": 0, 00:21:05.776 "state": "enabled", 00:21:05.776 "thread": "nvmf_tgt_poll_group_000", 00:21:05.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:05.776 "listen_address": { 00:21:05.776 "trtype": "TCP", 00:21:05.776 "adrfam": "IPv4", 00:21:05.776 "traddr": "10.0.0.2", 00:21:05.776 "trsvcid": "4420" 00:21:05.776 }, 00:21:05.776 "peer_address": { 00:21:05.776 "trtype": "TCP", 00:21:05.776 "adrfam": "IPv4", 00:21:05.776 "traddr": "10.0.0.1", 00:21:05.776 "trsvcid": "57630" 00:21:05.776 }, 00:21:05.776 "auth": { 00:21:05.776 "state": "completed", 00:21:05.777 "digest": "sha512", 00:21:05.777 "dhgroup": "ffdhe4096" 00:21:05.777 } 00:21:05.777 } 00:21:05.777 ]' 00:21:05.777 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.777 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.777 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.777 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.777 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.777 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.777 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.777 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.037 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:21:06.037 05:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:21:06.607 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.607 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:06.607 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.607 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.607 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
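Alongside the SPDK host stack, the run also drives the kernel initiator with nvme-cli, passing the DHHC-1 secrets directly instead of keyring names, as in the nvme_connect records above. A trimmed sketch follows, with SECRET and CTRL_SECRET standing in for the per-run DHHC-1:xx:...: strings printed in the trace.

# Kernel-initiator variant of the same handshake, matching the nvme connect
# invocations above. SECRET and CTRL_SECRET are placeholders for the DHHC-1
# secrets generated for this run; the host NQN/ID derive from the machine UUID.
hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" -l 0 \
  --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

# A clean teardown reports "disconnected 1 controller(s)", as seen in the log.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0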
00:21:06.607 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.607 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.607 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.866 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.125 00:21:07.125 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.125 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.126 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.384 { 00:21:07.384 "cntlid": 123, 00:21:07.384 "qid": 0, 00:21:07.384 "state": "enabled", 00:21:07.384 "thread": "nvmf_tgt_poll_group_000", 00:21:07.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:07.384 "listen_address": { 00:21:07.384 "trtype": "TCP", 00:21:07.384 "adrfam": "IPv4", 00:21:07.384 "traddr": "10.0.0.2", 00:21:07.384 "trsvcid": "4420" 00:21:07.384 }, 00:21:07.384 "peer_address": { 00:21:07.384 "trtype": "TCP", 00:21:07.384 "adrfam": "IPv4", 00:21:07.384 "traddr": "10.0.0.1", 00:21:07.384 "trsvcid": "57664" 00:21:07.384 }, 00:21:07.384 "auth": { 00:21:07.384 "state": "completed", 00:21:07.384 "digest": "sha512", 00:21:07.384 "dhgroup": "ffdhe4096" 00:21:07.384 } 00:21:07.384 } 00:21:07.384 ]' 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.384 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.644 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:21:07.644 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:21:08.212 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.212 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:08.212 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.212 05:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.471 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.746 00:21:08.746 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.746 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.746 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.006 05:14:22 
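Before each batch of attaches, the host is pinned to a single digest and DH group through bdev_nvme_set_options, so a successful authentication can only have used that exact combination; this stretch of the run holds the digest at sha512 while walking the FFDHE groups. A sketch of that outer loop, under the same assumptions as the iteration sketch further up:

# Outer loop shape: one digest/dhgroup combination at a time on the host,
# then every key id is attached and verified under it (a sketch, not the
# literal auth.sh source).
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
  # ...add_host / attach / check qpairs / detach for key0..key3 here...
done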
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.006 { 00:21:09.006 "cntlid": 125, 00:21:09.006 "qid": 0, 00:21:09.006 "state": "enabled", 00:21:09.006 "thread": "nvmf_tgt_poll_group_000", 00:21:09.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:09.006 "listen_address": { 00:21:09.006 "trtype": "TCP", 00:21:09.006 "adrfam": "IPv4", 00:21:09.006 "traddr": "10.0.0.2", 00:21:09.006 "trsvcid": "4420" 00:21:09.006 }, 00:21:09.006 "peer_address": { 00:21:09.006 "trtype": "TCP", 00:21:09.006 "adrfam": "IPv4", 00:21:09.006 "traddr": "10.0.0.1", 00:21:09.006 "trsvcid": "57688" 00:21:09.006 }, 00:21:09.006 "auth": { 00:21:09.006 "state": "completed", 00:21:09.006 "digest": "sha512", 00:21:09.006 "dhgroup": "ffdhe4096" 00:21:09.006 } 00:21:09.006 } 00:21:09.006 ]' 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.006 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.265 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.265 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.265 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.265 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:21:09.265 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:21:10.205 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.205 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:10.205 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.205 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.205 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.205 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.205 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:10.205 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.205 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.465 00:21:10.465 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.465 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.465 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.726 05:14:24 
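One detail worth noting in the add_host records above: key3 is registered with --dhchap-key only and no --dhchap-ctrlr-key, because ckeys[3] is evidently empty for this run and the ${ckeys[$3]:+...} expansion at auth.sh@68 emits the controller-key arguments only when a ckey exists, which makes the last key a unidirectional-authentication case. The idiom in isolation (names here are illustrative placeholders):

# ${var:+word} expands to word only when var is set and non-empty, so the
# optional flag pair appears without an explicit if/else. Index 3 is empty
# here, mirroring the key3 add_host call above that carries no ctrlr key.
# (SUBNQN/HOSTNQN and the ckey_* values are illustrative placeholders.)
ckeys=(ckey_a ckey_b ckey_c "")
for keyid in "${!ckeys[@]}"; do
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo rpc_cmd nvmf_subsystem_add_host SUBNQN HOSTNQN \
    --dhchap-key "key$keyid" "${ckey[@]}"
done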
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.726 { 00:21:10.726 "cntlid": 127, 00:21:10.726 "qid": 0, 00:21:10.726 "state": "enabled", 00:21:10.726 "thread": "nvmf_tgt_poll_group_000", 00:21:10.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:10.726 "listen_address": { 00:21:10.726 "trtype": "TCP", 00:21:10.726 "adrfam": "IPv4", 00:21:10.726 "traddr": "10.0.0.2", 00:21:10.726 "trsvcid": "4420" 00:21:10.726 }, 00:21:10.726 "peer_address": { 00:21:10.726 "trtype": "TCP", 00:21:10.726 "adrfam": "IPv4", 00:21:10.726 "traddr": "10.0.0.1", 00:21:10.726 "trsvcid": "53942" 00:21:10.726 }, 00:21:10.726 "auth": { 00:21:10.726 "state": "completed", 00:21:10.726 "digest": "sha512", 00:21:10.726 "dhgroup": "ffdhe4096" 00:21:10.726 } 00:21:10.726 } 00:21:10.726 ]' 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.726 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.986 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:10.986 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:11.562 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.562 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:11.562 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.562 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.562 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.562 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.562 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.562 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.562 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.823 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.084 00:21:12.084 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.084 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.084 
05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.344 { 00:21:12.344 "cntlid": 129, 00:21:12.344 "qid": 0, 00:21:12.344 "state": "enabled", 00:21:12.344 "thread": "nvmf_tgt_poll_group_000", 00:21:12.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:12.344 "listen_address": { 00:21:12.344 "trtype": "TCP", 00:21:12.344 "adrfam": "IPv4", 00:21:12.344 "traddr": "10.0.0.2", 00:21:12.344 "trsvcid": "4420" 00:21:12.344 }, 00:21:12.344 "peer_address": { 00:21:12.344 "trtype": "TCP", 00:21:12.344 "adrfam": "IPv4", 00:21:12.344 "traddr": "10.0.0.1", 00:21:12.344 "trsvcid": "53962" 00:21:12.344 }, 00:21:12.344 "auth": { 00:21:12.344 "state": "completed", 00:21:12.344 "digest": "sha512", 00:21:12.344 "dhgroup": "ffdhe6144" 00:21:12.344 } 00:21:12.344 } 00:21:12.344 ]' 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.344 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.604 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.604 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.604 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.604 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:21:12.604 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret 
DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:21:13.174 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.434 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.695 00:21:13.973 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.973 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.973 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.974 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.974 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.974 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.974 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.974 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.974 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.974 { 00:21:13.974 "cntlid": 131, 00:21:13.974 "qid": 0, 00:21:13.974 "state": "enabled", 00:21:13.974 "thread": "nvmf_tgt_poll_group_000", 00:21:13.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:13.974 "listen_address": { 00:21:13.974 "trtype": "TCP", 00:21:13.974 "adrfam": "IPv4", 00:21:13.974 "traddr": "10.0.0.2", 00:21:13.974 "trsvcid": "4420" 00:21:13.974 }, 00:21:13.974 "peer_address": { 00:21:13.974 "trtype": "TCP", 00:21:13.974 "adrfam": "IPv4", 00:21:13.974 "traddr": "10.0.0.1", 00:21:13.974 "trsvcid": "53992" 00:21:13.974 }, 00:21:13.974 "auth": { 00:21:13.974 "state": "completed", 00:21:13.974 "digest": "sha512", 00:21:13.974 "dhgroup": "ffdhe6144" 00:21:13.974 } 00:21:13.974 } 00:21:13.974 ]' 00:21:13.974 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.974 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.974 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.234 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.234 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.234 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.234 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.234 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.234 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:21:14.234 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:21:15.175 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.175 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:15.175 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.175 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.175 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.175 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.175 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.175 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.435 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.695 00:21:15.695 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.695 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.695 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.956 { 00:21:15.956 "cntlid": 133, 00:21:15.956 "qid": 0, 00:21:15.956 "state": "enabled", 00:21:15.956 "thread": "nvmf_tgt_poll_group_000", 00:21:15.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:15.956 "listen_address": { 00:21:15.956 "trtype": "TCP", 00:21:15.956 "adrfam": "IPv4", 00:21:15.956 "traddr": "10.0.0.2", 00:21:15.956 "trsvcid": "4420" 00:21:15.956 }, 00:21:15.956 "peer_address": { 00:21:15.956 "trtype": "TCP", 00:21:15.956 "adrfam": "IPv4", 00:21:15.956 "traddr": "10.0.0.1", 00:21:15.956 "trsvcid": "54020" 00:21:15.956 }, 00:21:15.956 "auth": { 00:21:15.956 "state": "completed", 00:21:15.956 "digest": "sha512", 00:21:15.956 "dhgroup": "ffdhe6144" 00:21:15.956 } 00:21:15.956 } 00:21:15.956 ]' 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.956 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.216 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret 
DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:21:16.216 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:21:16.785 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.786 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:16.786 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.786 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.786 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.786 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.786 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.786 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:17.046 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:17.046 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.046 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.046 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:17.047 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.047 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.047 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:17.047 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.047 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.047 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.047 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.047 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:17.047 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.321 00:21:17.321 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.321 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.321 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.581 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.581 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.581 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.581 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.581 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.581 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.581 { 00:21:17.581 "cntlid": 135, 00:21:17.581 "qid": 0, 00:21:17.581 "state": "enabled", 00:21:17.581 "thread": "nvmf_tgt_poll_group_000", 00:21:17.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:17.581 "listen_address": { 00:21:17.581 "trtype": "TCP", 00:21:17.581 "adrfam": "IPv4", 00:21:17.581 "traddr": "10.0.0.2", 00:21:17.581 "trsvcid": "4420" 00:21:17.581 }, 00:21:17.581 "peer_address": { 00:21:17.581 "trtype": "TCP", 00:21:17.581 "adrfam": "IPv4", 00:21:17.581 "traddr": "10.0.0.1", 00:21:17.581 "trsvcid": "54052" 00:21:17.581 }, 00:21:17.581 "auth": { 00:21:17.581 "state": "completed", 00:21:17.581 "digest": "sha512", 00:21:17.582 "dhgroup": "ffdhe6144" 00:21:17.582 } 00:21:17.582 } 00:21:17.582 ]' 00:21:17.582 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.582 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.582 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.582 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.582 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.842 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.842 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.842 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.842 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:17.842 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:18.411 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.670 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:18.670 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.671 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.240 00:21:19.240 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.240 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.240 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.500 { 00:21:19.500 "cntlid": 137, 00:21:19.500 "qid": 0, 00:21:19.500 "state": "enabled", 00:21:19.500 "thread": "nvmf_tgt_poll_group_000", 00:21:19.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:19.500 "listen_address": { 00:21:19.500 "trtype": "TCP", 00:21:19.500 "adrfam": "IPv4", 00:21:19.500 "traddr": "10.0.0.2", 00:21:19.500 "trsvcid": "4420" 00:21:19.500 }, 00:21:19.500 "peer_address": { 00:21:19.500 "trtype": "TCP", 00:21:19.500 "adrfam": "IPv4", 00:21:19.500 "traddr": "10.0.0.1", 00:21:19.500 "trsvcid": "54082" 00:21:19.500 }, 00:21:19.500 "auth": { 00:21:19.500 "state": "completed", 00:21:19.500 "digest": "sha512", 00:21:19.500 "dhgroup": "ffdhe8192" 00:21:19.500 } 00:21:19.500 } 00:21:19.500 ]' 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.500 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.760 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:21:19.760 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:21:20.330 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.330 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:20.330 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.330 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.330 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.330 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.330 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.330 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.591 05:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.591 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.162 00:21:21.162 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.162 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.162 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.162 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.162 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.162 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.162 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.162 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.162 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.162 { 00:21:21.162 "cntlid": 139, 00:21:21.162 "qid": 0, 00:21:21.162 "state": "enabled", 00:21:21.162 "thread": "nvmf_tgt_poll_group_000", 00:21:21.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:21.162 "listen_address": { 00:21:21.162 "trtype": "TCP", 00:21:21.162 "adrfam": "IPv4", 00:21:21.162 "traddr": "10.0.0.2", 00:21:21.162 "trsvcid": "4420" 00:21:21.162 }, 00:21:21.162 "peer_address": { 00:21:21.162 "trtype": "TCP", 00:21:21.162 "adrfam": "IPv4", 00:21:21.162 "traddr": "10.0.0.1", 00:21:21.162 "trsvcid": "48210" 00:21:21.162 }, 00:21:21.162 "auth": { 00:21:21.162 "state": "completed", 00:21:21.162 "digest": "sha512", 00:21:21.162 "dhgroup": "ffdhe8192" 00:21:21.162 } 00:21:21.162 } 00:21:21.162 ]' 00:21:21.162 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.162 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.162 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.422 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.422 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.422 05:14:35 
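Distilled from the iterations above, each connect_authenticate pass is the same userspace round trip: pin the host's DHCHAP digest/dhgroup, register the key pair on the target, attach, verify the negotiated auth parameters on the target's qpair, and detach. A minimal sketch, where $rpc stands in for spdk/scripts/rpc.py and $hostnqn for the host NQN used in this run:

  # host side (host.sock): pin negotiation to the digest/dhgroup under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target side (default RPC socket): allow this host to authenticate with key1/ckey1
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach, inspect the qpair the target sees, then tear down
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0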
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.422 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.422 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.682 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:21:21.682 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: --dhchap-ctrl-secret DHHC-1:02:MjlkMThmMWI3ZDAwZmY1NzBmZDIwZGQ4MmEzZjdmNDRiOWI1ZTJlYzczYmU3NGJklAgCYQ==: 00:21:22.252 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.252 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:22.252 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.252 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.252 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.252 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.252 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.252 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.524 05:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.524 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.095 00:21:23.095 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.095 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.095 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.095 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.095 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.095 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.095 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.095 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.095 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.095 { 00:21:23.095 "cntlid": 141, 00:21:23.095 "qid": 0, 00:21:23.095 "state": "enabled", 00:21:23.095 "thread": "nvmf_tgt_poll_group_000", 00:21:23.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:23.095 "listen_address": { 00:21:23.095 "trtype": "TCP", 00:21:23.095 "adrfam": "IPv4", 00:21:23.095 "traddr": "10.0.0.2", 00:21:23.095 "trsvcid": "4420" 00:21:23.095 }, 00:21:23.095 "peer_address": { 00:21:23.095 "trtype": "TCP", 00:21:23.095 "adrfam": "IPv4", 00:21:23.095 "traddr": "10.0.0.1", 00:21:23.095 "trsvcid": "48236" 00:21:23.095 }, 00:21:23.095 "auth": { 00:21:23.095 "state": "completed", 00:21:23.096 "digest": "sha512", 00:21:23.096 "dhgroup": "ffdhe8192" 00:21:23.096 } 00:21:23.096 } 00:21:23.096 ]' 00:21:23.096 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.096 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.096 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.096 05:14:37 
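The assertions against each qpair dump reduce to three jq probes over the nvmf_subsystem_get_qpairs output, as in the [[ ... ]] checks interleaved above; roughly, with $rpc as before:

  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]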
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.355 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.355 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.356 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.356 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.356 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:21:23.356 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:01:YTEyMWYyNzUzYjc4MmE1ZWVlNmFmOTcxNWNmZWYwYTPWJyA2: 00:21:24.295 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.296 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:24.296 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.296 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.296 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.296 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.296 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:24.296 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.296 05:14:38 
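Each accepted key pair is additionally probed through the kernel initiator, as in the nvme connect/disconnect pairs in the trace. The shape of that probe, with $hostnqn/$hostid standing for the host identity above and $key/$ckey for the generated DHHC-1 secrets printed in the log:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0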
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.296 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.867 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.867 { 00:21:24.867 "cntlid": 143, 00:21:24.867 "qid": 0, 00:21:24.867 "state": "enabled", 00:21:24.867 "thread": "nvmf_tgt_poll_group_000", 00:21:24.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:24.867 "listen_address": { 00:21:24.867 "trtype": "TCP", 00:21:24.867 "adrfam": "IPv4", 00:21:24.867 "traddr": "10.0.0.2", 00:21:24.867 "trsvcid": "4420" 00:21:24.867 }, 00:21:24.867 "peer_address": { 00:21:24.867 "trtype": "TCP", 00:21:24.867 "adrfam": "IPv4", 00:21:24.867 "traddr": "10.0.0.1", 00:21:24.867 "trsvcid": "48266" 00:21:24.867 }, 00:21:24.867 "auth": { 00:21:24.867 "state": "completed", 00:21:24.867 "digest": "sha512", 00:21:24.867 "dhgroup": "ffdhe8192" 00:21:24.867 } 00:21:24.867 } 00:21:24.867 ]' 00:21:24.867 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.127 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.127 
05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.127 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.127 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.127 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.127 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.127 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.386 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:25.386 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.955 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.215 05:14:40 
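For the final pass the harness re-enables every digest and dhgroup at once; the comma-joined lists handed to bdev_nvme_set_options come from IFS=, joins over the test arrays, roughly:

  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  dig=$(IFS=,; printf %s "${digests[*]}")   # sha256,sha384,sha512
  dhg=$(IFS=,; printf %s "${dhgroups[*]}")  # null,ffdhe2048,...,ffdhe8192
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$dig" --dhchap-dhgroups "$dhg"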
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.215 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.784 00:21:26.784 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.784 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.784 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.784 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.784 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.784 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.784 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.784 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.784 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.784 { 00:21:26.784 "cntlid": 145, 00:21:26.784 "qid": 0, 00:21:26.784 "state": "enabled", 00:21:26.784 "thread": "nvmf_tgt_poll_group_000", 00:21:26.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:26.784 "listen_address": { 00:21:26.784 "trtype": "TCP", 00:21:26.784 "adrfam": "IPv4", 00:21:26.784 "traddr": "10.0.0.2", 00:21:26.784 "trsvcid": "4420" 00:21:26.784 }, 00:21:26.784 "peer_address": { 00:21:26.784 
"trtype": "TCP", 00:21:26.784 "adrfam": "IPv4", 00:21:26.784 "traddr": "10.0.0.1", 00:21:26.784 "trsvcid": "48304" 00:21:26.784 }, 00:21:26.784 "auth": { 00:21:26.784 "state": "completed", 00:21:26.784 "digest": "sha512", 00:21:26.784 "dhgroup": "ffdhe8192" 00:21:26.784 } 00:21:26.784 } 00:21:26.784 ]' 00:21:26.785 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.785 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.785 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.044 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.044 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.044 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.044 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.044 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.303 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:21:27.303 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZjQ1NDA2YmZhODBhODMxNDYwMmVhYTZiYTQ0NmM1MTBjNTQ4NzEwMzA2MDNmMDAx+GFkig==: --dhchap-ctrl-secret DHHC-1:03:Yjg0YTQ5N2JlYjliYTZjNTI4Y2U3MTFlMzg0MDg3MzkyNjViMmM5NDFiZjVmNTkzZTFkODk4YTAzZTc5NjFlYW4tr9I=: 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:27.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:28.440 request: 00:21:28.440 { 00:21:28.440 "name": "nvme0", 00:21:28.440 "trtype": "tcp", 00:21:28.440 "traddr": "10.0.0.2", 00:21:28.440 "adrfam": "ipv4", 00:21:28.440 "trsvcid": "4420", 00:21:28.440 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:28.440 "prchk_reftag": false, 00:21:28.440 "prchk_guard": false, 00:21:28.440 "hdgst": false, 00:21:28.440 "ddgst": false, 00:21:28.440 "dhchap_key": "key2", 00:21:28.440 "allow_unrecognized_csi": false, 00:21:28.440 "method": "bdev_nvme_attach_controller", 00:21:28.440 "req_id": 1 00:21:28.440 } 00:21:28.440 Got JSON-RPC error response 00:21:28.440 response: 00:21:28.440 { 00:21:28.440 "code": -5, 00:21:28.440 "message": "Input/output error" 00:21:28.440 } 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.440 05:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.440 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.702 request: 00:21:28.702 { 00:21:28.702 "name": "nvme0", 00:21:28.702 "trtype": "tcp", 00:21:28.702 "traddr": "10.0.0.2", 00:21:28.702 "adrfam": "ipv4", 00:21:28.702 "trsvcid": "4420", 00:21:28.702 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:28.702 "prchk_reftag": false, 00:21:28.702 "prchk_guard": false, 00:21:28.702 "hdgst": false, 00:21:28.702 "ddgst": false, 00:21:28.702 "dhchap_key": "key1", 00:21:28.702 "dhchap_ctrlr_key": "ckey2", 00:21:28.702 "allow_unrecognized_csi": false, 00:21:28.702 "method": "bdev_nvme_attach_controller", 00:21:28.702 "req_id": 1 00:21:28.702 } 00:21:28.702 Got JSON-RPC error response 00:21:28.702 response: 00:21:28.702 { 00:21:28.702 "code": -5, 00:21:28.702 "message": "Input/output error" 00:21:28.702 } 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:28.702 05:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.702 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.273 request: 00:21:29.273 { 00:21:29.273 "name": "nvme0", 00:21:29.273 "trtype": "tcp", 00:21:29.273 "traddr": "10.0.0.2", 00:21:29.273 "adrfam": "ipv4", 00:21:29.273 "trsvcid": "4420", 00:21:29.273 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:29.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:29.273 "prchk_reftag": false, 00:21:29.273 "prchk_guard": false, 00:21:29.273 "hdgst": false, 00:21:29.273 "ddgst": false, 00:21:29.273 "dhchap_key": "key1", 00:21:29.273 "dhchap_ctrlr_key": "ckey1", 00:21:29.273 "allow_unrecognized_csi": false, 00:21:29.273 "method": "bdev_nvme_attach_controller", 00:21:29.273 "req_id": 1 00:21:29.273 } 00:21:29.273 Got JSON-RPC error response 00:21:29.273 response: 00:21:29.273 { 00:21:29.273 "code": -5, 00:21:29.273 "message": "Input/output error" 00:21:29.273 } 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1541844 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1541844 ']' 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1541844 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1541844 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1541844' 00:21:29.273 killing process with pid 1541844 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1541844 00:21:29.273 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1541844 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1567663 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1567663 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1567663 ']' 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.845 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1567663 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1567663 ']' 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
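The restart above boils down to launching nvmf_tgt with DH-HMAC-CHAP tracing enabled and then polling the RPC socket until the app answers. A minimal shell sketch of that sequence, assuming an SPDK checkout as the working directory (the netns wrapper, the -i/-e values and the log flag are the ones logged; the polling loop is only a rough stand-in for the suite's waitforlisten helper):

  # start the target with nvmf_auth debug tracing; --wait-for-rpc defers
  # subsystem initialization until framework_start_init is sent over RPC
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the server is up
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done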
00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.784 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.044 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.044 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:31.044 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:31.044 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.044 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.312 null0 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.a94 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.x6q ]] 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x6q 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.91S 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.h3q ]] 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h3q 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.312 05:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FcT 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.312 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.abN ]] 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.abN 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.H6m 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
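Stripped of the harness plumbing, the successful sha512/ffdhe8192 pass above comes down to two RPC calls: one against the target socket to authorize the host NQN with key3, and one against the host socket to perform the authenticated fabrics attach. A condensed sketch using the exact values from this run (rpc.py paths shortened to their repo-relative form):

  # target side: allow this host on cnode0 and bind DHCHAP key3 to it
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3
  # host side: TCP attach that triggers the DH-HMAC-CHAP exchange
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3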
00:21:31.313 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.254 nvme0n1 00:21:32.254 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.254 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.254 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.254 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.254 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.254 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.254 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.254 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.254 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.254 { 00:21:32.254 "cntlid": 1, 00:21:32.254 "qid": 0, 00:21:32.254 "state": "enabled", 00:21:32.254 "thread": "nvmf_tgt_poll_group_000", 00:21:32.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:32.254 "listen_address": { 00:21:32.254 "trtype": "TCP", 00:21:32.254 "adrfam": "IPv4", 00:21:32.254 "traddr": "10.0.0.2", 00:21:32.254 "trsvcid": "4420" 00:21:32.254 }, 00:21:32.254 "peer_address": { 00:21:32.254 "trtype": "TCP", 00:21:32.254 "adrfam": "IPv4", 00:21:32.254 "traddr": "10.0.0.1", 00:21:32.254 "trsvcid": "33258" 00:21:32.254 }, 00:21:32.254 "auth": { 00:21:32.254 "state": "completed", 00:21:32.255 "digest": "sha512", 00:21:32.255 "dhgroup": "ffdhe8192" 00:21:32.255 } 00:21:32.255 } 00:21:32.255 ]' 00:21:32.255 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.255 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.255 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.255 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.255 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.514 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.514 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.514 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.514 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:32.514 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.452 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.712 request: 00:21:33.712 { 00:21:33.712 "name": "nvme0", 00:21:33.712 "trtype": "tcp", 00:21:33.712 "traddr": "10.0.0.2", 00:21:33.712 "adrfam": "ipv4", 00:21:33.712 "trsvcid": "4420", 00:21:33.712 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:33.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:33.712 "prchk_reftag": false, 00:21:33.712 "prchk_guard": false, 00:21:33.712 "hdgst": false, 00:21:33.712 "ddgst": false, 00:21:33.712 "dhchap_key": "key3", 00:21:33.712 "allow_unrecognized_csi": false, 00:21:33.712 "method": "bdev_nvme_attach_controller", 00:21:33.712 "req_id": 1 00:21:33.712 } 00:21:33.712 Got JSON-RPC error response 00:21:33.712 response: 00:21:33.712 { 00:21:33.712 "code": -5, 00:21:33.712 "message": "Input/output error" 00:21:33.712 } 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.712 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.972 request: 00:21:33.972 { 00:21:33.972 "name": "nvme0", 00:21:33.972 "trtype": "tcp", 00:21:33.972 "traddr": "10.0.0.2", 00:21:33.972 "adrfam": "ipv4", 00:21:33.972 "trsvcid": "4420", 00:21:33.972 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:33.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:33.972 "prchk_reftag": false, 00:21:33.972 "prchk_guard": false, 00:21:33.972 "hdgst": false, 00:21:33.972 "ddgst": false, 00:21:33.972 "dhchap_key": "key3", 00:21:33.972 "allow_unrecognized_csi": false, 00:21:33.972 "method": "bdev_nvme_attach_controller", 00:21:33.972 "req_id": 1 00:21:33.972 } 00:21:33.972 Got JSON-RPC error response 00:21:33.972 response: 00:21:33.972 { 00:21:33.972 "code": -5, 00:21:33.972 "message": "Input/output error" 00:21:33.972 } 00:21:33.972 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:33.972 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:33.972 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:33.972 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:33.972 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:33.972 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:33.972 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:33.972 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:33.972 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:33.973 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:34.233 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.234 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.234 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.234 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.493 request: 00:21:34.494 { 00:21:34.494 "name": "nvme0", 00:21:34.494 "trtype": "tcp", 00:21:34.494 "traddr": "10.0.0.2", 00:21:34.494 "adrfam": "ipv4", 00:21:34.494 "trsvcid": "4420", 00:21:34.494 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:34.494 "prchk_reftag": false, 00:21:34.494 "prchk_guard": false, 00:21:34.494 "hdgst": false, 00:21:34.494 "ddgst": false, 00:21:34.494 "dhchap_key": "key0", 00:21:34.494 "dhchap_ctrlr_key": "key1", 00:21:34.494 "allow_unrecognized_csi": false, 00:21:34.494 "method": "bdev_nvme_attach_controller", 00:21:34.494 "req_id": 1 00:21:34.494 } 00:21:34.494 Got JSON-RPC error response 00:21:34.494 response: 00:21:34.494 { 00:21:34.494 "code": -5, 00:21:34.494 "message": "Input/output error" 00:21:34.494 } 00:21:34.494 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:34.494 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.494 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.494 05:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.494 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:34.494 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:34.494 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:34.754 nvme0n1 00:21:34.754 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:34.754 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:34.754 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:35.015 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:35.953 nvme0n1 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:35.953 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.213 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.213 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:36.214 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: --dhchap-ctrl-secret DHHC-1:03:MjI1MzU2MjMyZDE1YjU5ODVmNzlhYjYzZGNlN2U5MDkwMWM1YmE3NDE2ZDhjNmU4ODViYTdmNWFjNGFmMTQyZaqiRks=: 00:21:36.783 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:36.783 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:36.783 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:36.783 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:36.783 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:36.783 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:36.783 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:36.783 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.783 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:37.044 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:37.614 request: 00:21:37.614 { 00:21:37.614 "name": "nvme0", 00:21:37.614 "trtype": "tcp", 00:21:37.614 "traddr": "10.0.0.2", 00:21:37.614 "adrfam": "ipv4", 00:21:37.614 "trsvcid": "4420", 00:21:37.614 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:37.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:37.614 "prchk_reftag": false, 00:21:37.614 "prchk_guard": false, 00:21:37.614 "hdgst": false, 00:21:37.614 "ddgst": false, 00:21:37.614 "dhchap_key": "key1", 00:21:37.614 "allow_unrecognized_csi": false, 00:21:37.614 "method": "bdev_nvme_attach_controller", 00:21:37.614 "req_id": 1 00:21:37.614 } 00:21:37.614 Got JSON-RPC error response 00:21:37.614 response: 00:21:37.614 { 00:21:37.614 "code": -5, 00:21:37.614 "message": "Input/output error" 00:21:37.614 } 00:21:37.614 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:37.614 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.614 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.614 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.614 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.614 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.614 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:38.183 nvme0n1 00:21:38.183 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:38.183 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:38.183 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.444 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.444 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.444 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.705 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:38.705 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.705 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.705 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.705 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:38.705 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:38.705 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:38.966 nvme0n1 00:21:38.966 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:38.966 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:38.966 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.966 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.966 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.966 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: '' 2s 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: ]] 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODY0YjU1MDNkODQ3NjlkMWEwZDllNWQxNmFlOTQ4Nzhlh3Kn: 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:39.226 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:41.137 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:41.137 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:41.137 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:41.137 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:41.137 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:41.137 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:41.404 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:41.404 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:41.404 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.404 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.404 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: 2s 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: ]] 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTdiYjAwOTIyMGIxMWE1Y2Y1ZjA5NDViOTQ1OGU5ZTVmYmFhM2M0MzBkZGQyNmI4KJMq4A==: 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:41.405 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:43.319 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:43.320 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:44.259 nvme0n1 00:21:44.259 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:44.259 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.259 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.259 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.259 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:44.259 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:44.519 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:44.519 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:44.519 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.779 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.779 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:44.779 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.779 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.779 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.779 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:44.779 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:45.038 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:45.038 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:45.038 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:45.038 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:45.607 request: 00:21:45.607 { 00:21:45.607 "name": "nvme0", 00:21:45.607 "dhchap_key": "key1", 00:21:45.607 "dhchap_ctrlr_key": "key3", 00:21:45.607 "method": "bdev_nvme_set_keys", 00:21:45.607 "req_id": 1 00:21:45.607 } 00:21:45.607 Got JSON-RPC error response 00:21:45.607 response: 00:21:45.607 { 00:21:45.607 "code": -13, 00:21:45.607 "message": "Permission denied" 00:21:45.607 } 00:21:45.607 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:45.607 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:45.607 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:45.607 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:45.607 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:45.607 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:45.607 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.866 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:45.866 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:46.804 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:46.804 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:46.804 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.064 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:47.064 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:47.064 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.064 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.064 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.064 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:47.064 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:47.064 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:47.635 nvme0n1 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
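The xtrace above walks the DH-HMAC-CHAP re-key sequence: the target's key pair is staged first with nvmf_subsystem_set_keys, then the initiator is switched with bdev_nvme_set_keys, so a matching pair is always available for reauthentication; the deliberately mismatched call that follows is expected to be rejected. A minimal sketch of the same two RPCs, with the long workspace path shortened to rpc.py and the key names taken from the test's keyring:

# 1) stage the new pair on the target side (the old pair stays valid until replaced)
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# 2) move the initiator to the same pair; a pair the target does not hold
#    (e.g. key2/key0 in the trace) fails with JSON-RPC -13 "Permission denied"
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3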
00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:47.635 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:48.206 request: 00:21:48.206 { 00:21:48.206 "name": "nvme0", 00:21:48.206 "dhchap_key": "key2", 00:21:48.206 "dhchap_ctrlr_key": "key0", 00:21:48.206 "method": "bdev_nvme_set_keys", 00:21:48.206 "req_id": 1 00:21:48.206 } 00:21:48.206 Got JSON-RPC error response 00:21:48.206 response: 00:21:48.206 { 00:21:48.206 "code": -13, 00:21:48.206 "message": "Permission denied" 00:21:48.206 } 00:21:48.206 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:48.206 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.206 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.207 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.207 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:48.207 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:48.207 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.467 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:48.467 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:49.407 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:49.407 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:49.407 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1542112 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1542112 ']' 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1542112 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:49.667 
05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1542112 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1542112' 00:21:49.667 killing process with pid 1542112 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1542112 00:21:49.667 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1542112 00:21:50.609 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:50.609 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.609 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:50.609 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.609 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:50.609 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.609 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.609 rmmod nvme_tcp 00:21:50.870 rmmod nvme_fabrics 00:21:50.870 rmmod nvme_keyring 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1567663 ']' 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1567663 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1567663 ']' 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1567663 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1567663 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1567663' 00:21:50.870 killing process with pid 1567663 00:21:50.870 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1567663 00:21:50.870 05:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1567663 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.439 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.a94 /tmp/spdk.key-sha256.91S /tmp/spdk.key-sha384.FcT /tmp/spdk.key-sha512.H6m /tmp/spdk.key-sha512.x6q /tmp/spdk.key-sha384.h3q /tmp/spdk.key-sha256.abN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:53.977 00:21:53.977 real 2m39.150s 00:21:53.977 user 5m56.334s 00:21:53.977 sys 0m24.639s 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.977 ************************************ 00:21:53.977 END TEST nvmf_auth_target 00:21:53.977 ************************************ 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:53.977 ************************************ 00:21:53.977 START TEST nvmf_bdevio_no_huge 00:21:53.977 ************************************ 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:53.977 * Looking for test storage... 
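The teardown just logged follows the suite's usual shape: disarm the EXIT trap, kill the host-side RPC daemon and the nvmf target by pid, unload nvme-tcp/nvme-fabrics/nvme-keyring, restore iptables, and delete the generated DHHC-1 secrets. A condensed sketch of that cleanup pattern, assuming hypothetical pid variables (the helper names are the ones visible in the trace):

cleanup() {
    killprocess "$hostpid"        # host-side rpc daemon (reactor_1 above, pid 1542112)
    nvmftestfini                  # kills the target pid, modprobe -r nvme-tcp/nvme-fabrics, restores iptables
    rm -f /tmp/spdk.key-*         # discard the generated DHHC-1 secrets
}
trap cleanup SIGINT SIGTERM EXIT  # armed at test start; disarmed with 'trap -' on success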
00:21:53.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:53.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.977 --rc genhtml_branch_coverage=1 00:21:53.977 --rc genhtml_function_coverage=1 00:21:53.977 --rc genhtml_legend=1 00:21:53.977 --rc geninfo_all_blocks=1 00:21:53.977 --rc geninfo_unexecuted_blocks=1 00:21:53.977 00:21:53.977 ' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:53.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.977 --rc genhtml_branch_coverage=1 00:21:53.977 --rc genhtml_function_coverage=1 00:21:53.977 --rc genhtml_legend=1 00:21:53.977 --rc geninfo_all_blocks=1 00:21:53.977 --rc geninfo_unexecuted_blocks=1 00:21:53.977 00:21:53.977 ' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:53.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.977 --rc genhtml_branch_coverage=1 00:21:53.977 --rc genhtml_function_coverage=1 00:21:53.977 --rc genhtml_legend=1 00:21:53.977 --rc geninfo_all_blocks=1 00:21:53.977 --rc geninfo_unexecuted_blocks=1 00:21:53.977 00:21:53.977 ' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:53.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.977 --rc genhtml_branch_coverage=1 00:21:53.977 --rc genhtml_function_coverage=1 00:21:53.977 --rc genhtml_legend=1 00:21:53.977 --rc geninfo_all_blocks=1 00:21:53.977 --rc geninfo_unexecuted_blocks=1 00:21:53.977 00:21:53.977 ' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.977 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:53.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.978 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.143 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.144 
05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:02.144 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:02.144 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:02.144 Found net devices under 0000:31:00.0: cvl_0_0 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:02.144 Found net devices under 0000:31:00.1: cvl_0_1 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.144 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:22:02.144 00:22:02.144 --- 10.0.0.2 ping statistics --- 00:22:02.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.144 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:22:02.144 00:22:02.144 --- 10.0.0.1 ping statistics --- 00:22:02.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.144 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1576577 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1576577 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1576577 ']' 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.144 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.144 [2024-12-09 05:15:15.354433] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:22:02.144 [2024-12-09 05:15:15.354576] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:02.144 [2024-12-09 05:15:15.541549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.144 [2024-12-09 05:15:15.660649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.144 [2024-12-09 05:15:15.660707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.144 [2024-12-09 05:15:15.660720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.144 [2024-12-09 05:15:15.660732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.144 [2024-12-09 05:15:15.660742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
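Here nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace with hugepages disabled (--no-huge) and a 1024 MiB heap (-s 1024), then waitforlisten blocks until the RPC socket answers before any rpc_cmd runs. A minimal sketch of that start-and-wait step, assuming the default /var/tmp/spdk.sock socket (the polling loop stands in for the waitforlisten helper):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# wait until the app answers on its RPC socket before issuing rpc_cmd calls
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done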
00:22:02.144 [2024-12-09 05:15:15.663067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:02.144 [2024-12-09 05:15:15.663320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:02.144 [2024-12-09 05:15:15.663428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.144 [2024-12-09 05:15:15.663443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:02.144 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.144 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:02.144 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.144 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.144 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.406 [2024-12-09 05:15:16.176844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.406 Malloc0 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.406 [2024-12-09 05:15:16.271118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.406 { 00:22:02.406 "params": { 00:22:02.406 "name": "Nvme$subsystem", 00:22:02.406 "trtype": "$TEST_TRANSPORT", 00:22:02.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.406 "adrfam": "ipv4", 00:22:02.406 "trsvcid": "$NVMF_PORT", 00:22:02.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.406 "hdgst": ${hdgst:-false}, 00:22:02.406 "ddgst": ${ddgst:-false} 00:22:02.406 }, 00:22:02.406 "method": "bdev_nvme_attach_controller" 00:22:02.406 } 00:22:02.406 EOF 00:22:02.406 )") 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:02.406 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:02.406 "params": { 00:22:02.406 "name": "Nvme1", 00:22:02.406 "trtype": "tcp", 00:22:02.406 "traddr": "10.0.0.2", 00:22:02.406 "adrfam": "ipv4", 00:22:02.406 "trsvcid": "4420", 00:22:02.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.406 "hdgst": false, 00:22:02.406 "ddgst": false 00:22:02.406 }, 00:22:02.406 "method": "bdev_nvme_attach_controller" 00:22:02.406 }' 00:22:02.406 [2024-12-09 05:15:16.364359] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:02.406 [2024-12-09 05:15:16.364484] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1576929 ] 00:22:02.666 [2024-12-09 05:15:16.538921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:02.666 [2024-12-09 05:15:16.658350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.666 [2024-12-09 05:15:16.658459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.666 [2024-12-09 05:15:16.658487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.236 I/O targets: 00:22:03.236 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:03.236 00:22:03.236 00:22:03.236 CUnit - A unit testing framework for C - Version 2.1-3 00:22:03.237 http://cunit.sourceforge.net/ 00:22:03.237 00:22:03.237 00:22:03.237 Suite: bdevio tests on: Nvme1n1 00:22:03.237 Test: blockdev write read block ...passed 00:22:03.237 Test: blockdev write zeroes read block ...passed 00:22:03.237 Test: blockdev write zeroes read no split ...passed 00:22:03.237 Test: blockdev write zeroes read split ...passed 00:22:03.497 Test: blockdev write zeroes read split partial ...passed 00:22:03.497 Test: blockdev reset ...[2024-12-09 05:15:17.252585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:03.497 [2024-12-09 05:15:17.252767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000391500 (9): Bad file descriptor 00:22:03.497 [2024-12-09 05:15:17.272964] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
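Condensed, the nvmf_bdevio_no_huge target bring-up traced above reduces to the sketch below; every command is copied from the xtrace (rpc_cmd is the suite's wrapper around scripts/rpc.py), and sizes match the "131072 blocks of 512 bytes (64 MiB)" bdev reported later.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as traced
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then attaches as an initiator without hugepages, reading the
# bdev_nvme_attach_controller JSON printed above from fd 62:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json /dev/fd/62 --no-huge -s 1024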
00:22:03.497 passed 00:22:03.497 Test: blockdev write read 8 blocks ...passed 00:22:03.497 Test: blockdev write read size > 128k ...passed 00:22:03.497 Test: blockdev write read invalid size ...passed 00:22:03.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:03.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:03.497 Test: blockdev write read max offset ...passed 00:22:03.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:03.497 Test: blockdev writev readv 8 blocks ...passed 00:22:03.497 Test: blockdev writev readv 30 x 1block ...passed 00:22:03.758 Test: blockdev writev readv block ...passed 00:22:03.758 Test: blockdev writev readv size > 128k ...passed 00:22:03.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:03.759 Test: blockdev comparev and writev ...[2024-12-09 05:15:17.499692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.759 [2024-12-09 05:15:17.499753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.499779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.759 [2024-12-09 05:15:17.499793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.500222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.759 [2024-12-09 05:15:17.500247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.500266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.759 [2024-12-09 05:15:17.500279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.500772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.759 [2024-12-09 05:15:17.500801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.500829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.759 [2024-12-09 05:15:17.500842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.501220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.759 [2024-12-09 05:15:17.501242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.501266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.759 [2024-12-09 05:15:17.501279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:03.759 passed 00:22:03.759 Test: blockdev nvme passthru rw ...passed 00:22:03.759 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:15:17.585395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:03.759 [2024-12-09 05:15:17.585438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.585735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:03.759 [2024-12-09 05:15:17.585755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.586080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:03.759 [2024-12-09 05:15:17.586101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:03.759 [2024-12-09 05:15:17.586380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:03.759 [2024-12-09 05:15:17.586399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:03.759 passed 00:22:03.759 Test: blockdev nvme admin passthru ...passed 00:22:03.759 Test: blockdev copy ...passed 00:22:03.759 00:22:03.759 Run Summary: Type Total Ran Passed Failed Inactive 00:22:03.759 suites 1 1 n/a 0 0 00:22:03.759 tests 23 23 23 0 0 00:22:03.759 asserts 152 152 152 0 n/a 00:22:03.759 00:22:03.759 Elapsed time = 1.231 seconds 00:22:04.329 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.330 rmmod nvme_tcp 00:22:04.330 rmmod nvme_fabrics 00:22:04.330 rmmod nvme_keyring 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1576577 ']' 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1576577 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1576577 ']' 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1576577 00:22:04.330 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:04.590 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.590 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1576577 00:22:04.590 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:04.590 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:04.590 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1576577' 00:22:04.590 killing process with pid 1576577 00:22:04.590 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1576577 00:22:04.590 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1576577 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.161 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.157 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.157 00:22:07.157 real 0m13.496s 00:22:07.157 user 0m18.259s 00:22:07.157 sys 0m6.974s 00:22:07.157 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.157 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.157 ************************************ 00:22:07.157 END TEST nvmf_bdevio_no_huge 00:22:07.157 ************************************ 00:22:07.157 05:15:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:07.157 05:15:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.157 05:15:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.157 05:15:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:07.157 ************************************ 00:22:07.157 START TEST nvmf_tls 00:22:07.157 ************************************ 00:22:07.157 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:07.157 * Looking for test storage... 00:22:07.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.481 --rc genhtml_branch_coverage=1 00:22:07.481 --rc genhtml_function_coverage=1 00:22:07.481 --rc genhtml_legend=1 00:22:07.481 --rc geninfo_all_blocks=1 00:22:07.481 --rc geninfo_unexecuted_blocks=1 00:22:07.481 00:22:07.481 ' 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.481 --rc genhtml_branch_coverage=1 00:22:07.481 --rc genhtml_function_coverage=1 00:22:07.481 --rc genhtml_legend=1 00:22:07.481 --rc geninfo_all_blocks=1 00:22:07.481 --rc geninfo_unexecuted_blocks=1 00:22:07.481 00:22:07.481 ' 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.481 --rc genhtml_branch_coverage=1 00:22:07.481 --rc genhtml_function_coverage=1 00:22:07.481 --rc genhtml_legend=1 00:22:07.481 --rc geninfo_all_blocks=1 00:22:07.481 --rc geninfo_unexecuted_blocks=1 00:22:07.481 00:22:07.481 ' 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.481 --rc genhtml_branch_coverage=1 00:22:07.481 --rc genhtml_function_coverage=1 00:22:07.481 --rc genhtml_legend=1 00:22:07.481 --rc geninfo_all_blocks=1 00:22:07.481 --rc geninfo_unexecuted_blocks=1 00:22:07.481 00:22:07.481 ' 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
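The version dance traced above (cmp_versions / decimal / lt) decides whether the installed lcov predates 2.x and therefore needs the legacy --rc option spelling. A minimal sketch of the comparison, assuming plain dotted versions (the real scripts/common.sh splits on '.', '-' and ':' and handles more operators):

lt() {                                  # usage: lt 1.15 2 -> success when $1 < $2
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                                      # equal is not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 &&
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'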
00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.481 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.482 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
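gather_supported_nvmf_pci_devs, whose trace follows, works from per-family PCI device-ID tables (E810: 0x1592/0x159b, X722: 0x37d2, plus a list of Mellanox IDs) and matches the machine's NICs against them. A hypothetical stand-alone equivalent using lspci directly instead of the suite's cached pci_bus_cache map:

e810_ids=(0x1592 0x159b)                        # IDs appended to the e810 array above
for id in "${e810_ids[@]}"; do
  while read -r addr _; do
    echo "E810 candidate: $addr (8086:${id#0x})"
  done < <(lspci -Dn -d "8086:${id#0x}")
done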
00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:15.616 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:15.616 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:15.616 Found net devices under 0000:31:00.0: cvl_0_0 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:15.616 Found net devices under 0000:31:00.1: cvl_0_1 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:22:15.616 00:22:15.616 --- 10.0.0.2 ping statistics --- 00:22:15.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.616 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:22:15.616 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
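The namespace wiring that makes this two-port loopback work is spread through the trace above; condensed (interface names, addresses, and rules copied from the log, run as root), the target-side port cvl_0_0 is moved into its own network namespace so initiator and target traffic really crosses the link between the two E810 ports:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1     # start clean
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns -> target (0.662 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator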
00:22:15.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:22:15.616 00:22:15.616 --- 10.0.0.1 ping statistics --- 00:22:15.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.616 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1581641 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1581641 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1581641 ']' 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.617 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.617 [2024-12-09 05:15:29.028521] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:15.617 [2024-12-09 05:15:29.028656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.617 [2024-12-09 05:15:29.195271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.617 [2024-12-09 05:15:29.315862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.617 [2024-12-09 05:15:29.315927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.617 [2024-12-09 05:15:29.315940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.617 [2024-12-09 05:15:29.315954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.617 [2024-12-09 05:15:29.315967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.617 [2024-12-09 05:15:29.317462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.876 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.876 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:15.876 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.876 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.876 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.876 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.876 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:15.876 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:16.136 true 00:22:16.136 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:16.136 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:16.397 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:16.397 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:16.397 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:16.658 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:16.658 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:16.658 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:16.658 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:16.658 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:16.918 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:16.918 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:17.180 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:17.180 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:17.180 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:17.180 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:17.440 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:17.440 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:17.440 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:17.440 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:17.440 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:17.701 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:17.701 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:17.701 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:17.962 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:17.962 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:18.223 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:18.223 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:18.223 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:18.223 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:18.223 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:18.223 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:18.223 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:18.223 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:18.223 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.AU58FkesBU 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.PsZvmEw32O 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.AU58FkesBU 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.PsZvmEw32O 00:22:18.223 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:18.483 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:18.744 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.AU58FkesBU 00:22:18.744 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AU58FkesBU 00:22:18.744 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:19.004 [2024-12-09 05:15:32.778429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.004 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:19.004 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.264 [2024-12-09 05:15:33.115267] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.264 [2024-12-09 05:15:33.115512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.264 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.525 malloc0 00:22:19.525 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.525 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AU58FkesBU 00:22:19.786 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:19.786 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.AU58FkesBU 00:22:32.012 Initializing NVMe Controllers 00:22:32.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:32.012 Initialization complete. Launching workers. 00:22:32.012 ======================================================== 00:22:32.012 Latency(us) 00:22:32.012 Device Information : IOPS MiB/s Average min max 00:22:32.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15453.19 60.36 4141.89 1657.46 5843.16 00:22:32.012 ======================================================== 00:22:32.012 Total : 15453.19 60.36 4141.89 1657.46 5843.16 00:22:32.012 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AU58FkesBU 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AU58FkesBU 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1584449 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1584449 /var/tmp/bdevperf.sock 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1584449 ']' 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
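The TLS target bring-up traced above condenses to the sketch below (all values copied from the trace; the /tmp paths are this run's mktemp outputs). The generated keys follow the NVMe/TCP PSK interchange format NVMeTLSkey-1:01:<base64>: where, as far as the format_interchange_psk helper shows, the '01' field selects a SHA-256 PSK digest and the base64 payload carries the raw secret plus a CRC-32 trailer.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' \
    > /tmp/tmp.AU58FkesBU && chmod 0600 /tmp/tmp.AU58FkesBU
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13      # TLS 1.3, verified via jq above
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                       # -k: TLS listener (see notice above)
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.AU58FkesBU
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0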
00:22:32.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.012 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.012 [2024-12-09 05:15:44.071884] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:22:32.012 [2024-12-09 05:15:44.071995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584449 ] 00:22:32.012 [2024-12-09 05:15:44.213035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.012 [2024-12-09 05:15:44.309851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.012 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.012 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:32.012 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AU58FkesBU 00:22:32.012 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:32.012 [2024-12-09 05:15:45.135706] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.012 TLSTESTn1 00:22:32.012 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:32.012 Running I/O for 10 seconds... 
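On the initiator side the flow is: start bdevperf idle (-z), wait for its private RPC socket (waitforlisten), feed it the same PSK, attach a TLS-protected controller, then drive the 10-second verify workload via perform_tests. Condensed from the trace:

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AU58FkesBU
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0             # creates bdev TLSTESTn1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests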
00:22:33.550 3098.00 IOPS, 12.10 MiB/s [2024-12-09T04:15:48.487Z] 2961.50 IOPS, 11.57 MiB/s [2024-12-09T04:15:49.429Z] 3294.67 IOPS, 12.87 MiB/s [2024-12-09T04:15:50.370Z] 3106.50 IOPS, 12.13 MiB/s [2024-12-09T04:15:51.755Z] 3000.40 IOPS, 11.72 MiB/s [2024-12-09T04:15:52.697Z] 2930.00 IOPS, 11.45 MiB/s [2024-12-09T04:15:53.640Z] 3231.00 IOPS, 12.62 MiB/s [2024-12-09T04:15:54.582Z] 3187.75 IOPS, 12.45 MiB/s [2024-12-09T04:15:55.521Z] 3071.33 IOPS, 12.00 MiB/s [2024-12-09T04:15:55.521Z] 3029.20 IOPS, 11.83 MiB/s 00:22:41.524 Latency(us) 00:22:41.524 [2024-12-09T04:15:55.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.524 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.524 Verification LBA range: start 0x0 length 0x2000 00:22:41.524 TLSTESTn1 : 10.01 3037.84 11.87 0.00 0.00 42083.97 5406.72 104857.60 00:22:41.524 [2024-12-09T04:15:55.521Z] =================================================================================================================== 00:22:41.524 [2024-12-09T04:15:55.521Z] Total : 3037.84 11.87 0.00 0.00 42083.97 5406.72 104857.60 00:22:41.524 { 00:22:41.524 "results": [ 00:22:41.524 { 00:22:41.524 "job": "TLSTESTn1", 00:22:41.524 "core_mask": "0x4", 00:22:41.524 "workload": "verify", 00:22:41.524 "status": "finished", 00:22:41.524 "verify_range": { 00:22:41.524 "start": 0, 00:22:41.524 "length": 8192 00:22:41.524 }, 00:22:41.524 "queue_depth": 128, 00:22:41.524 "io_size": 4096, 00:22:41.524 "runtime": 10.013351, 00:22:41.524 "iops": 3037.844174242968, 00:22:41.524 "mibps": 11.866578805636594, 00:22:41.524 "io_failed": 0, 00:22:41.524 "io_timeout": 0, 00:22:41.524 "avg_latency_us": 42083.97468073682, 00:22:41.524 "min_latency_us": 5406.72, 00:22:41.524 "max_latency_us": 104857.6 00:22:41.524 } 00:22:41.524 ], 00:22:41.524 "core_count": 1 00:22:41.524 } 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1584449 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1584449 ']' 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1584449 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1584449 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1584449' 00:22:41.524 killing process with pid 1584449 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1584449 00:22:41.524 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.524 00:22:41.524 Latency(us) 00:22:41.524 [2024-12-09T04:15:55.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.524 [2024-12-09T04:15:55.521Z] 
=================================================================================================================== 00:22:41.524 [2024-12-09T04:15:55.521Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.524 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1584449 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PsZvmEw32O 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PsZvmEw32O 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PsZvmEw32O 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PsZvmEw32O 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1586730 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1586730 /var/tmp/bdevperf.sock 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1586730 ']' 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.095 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.095 [2024-12-09 05:15:56.007401] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:22:42.095 [2024-12-09 05:15:56.007509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586730 ] 00:22:42.355 [2024-12-09 05:15:56.142548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.356 [2024-12-09 05:15:56.216895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.925 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.925 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:42.925 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PsZvmEw32O 00:22:43.185 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.185 [2024-12-09 05:15:57.118960] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.185 [2024-12-09 05:15:57.129103] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:43.185 [2024-12-09 05:15:57.130155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (107): Transport endpoint is not connected 00:22:43.185 [2024-12-09 05:15:57.131139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:22:43.185 [2024-12-09 05:15:57.132145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:43.185 [2024-12-09 05:15:57.132161] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:43.185 [2024-12-09 05:15:57.132174] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:43.185 [2024-12-09 05:15:57.132187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
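The attach failure above is the expected outcome of loading a PSK the target has never seen: the TLS handshake dies on the admin queue and the host surfaces errno 107 (Transport endpoint is not connected) before any NVMe-oF traffic flows. The key files these tests hand to keyring_file_add_key hold interchange-format PSK strings. Below is a minimal sketch of producing one, assuming the interchange format base64-encodes the configured key followed by its little-endian CRC32, with the `02` field taken here to denote the SHA-384 retained-hash variant; the PSK bytes are the example value generated later in this run, and the heredoc style mirrors the `python -` call visible in nvmf/common.sh further down.

# sketch (assumed format): build an NVMeTLSkey-1 interchange string for a 48-byte PSK
python3 - <<'PY'
import base64, struct, zlib
psk = b"00112233445566778899aabbccddeeff0011223344556677"   # example key from this run
blob = psk + struct.pack("<I", zlib.crc32(psk))              # append little-endian CRC32
print("NVMeTLSkey-1:02:" + base64.b64encode(blob).decode() + ":")  # 02 assumed = SHA-384
PY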
00:22:43.185 request: 00:22:43.185 { 00:22:43.185 "name": "TLSTEST", 00:22:43.185 "trtype": "tcp", 00:22:43.185 "traddr": "10.0.0.2", 00:22:43.185 "adrfam": "ipv4", 00:22:43.185 "trsvcid": "4420", 00:22:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.185 "prchk_reftag": false, 00:22:43.185 "prchk_guard": false, 00:22:43.185 "hdgst": false, 00:22:43.185 "ddgst": false, 00:22:43.185 "psk": "key0", 00:22:43.185 "allow_unrecognized_csi": false, 00:22:43.185 "method": "bdev_nvme_attach_controller", 00:22:43.185 "req_id": 1 00:22:43.185 } 00:22:43.185 Got JSON-RPC error response 00:22:43.185 response: 00:22:43.185 { 00:22:43.185 "code": -5, 00:22:43.185 "message": "Input/output error" 00:22:43.185 } 00:22:43.185 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1586730 00:22:43.185 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1586730 ']' 00:22:43.185 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1586730 00:22:43.185 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:43.185 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.185 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1586730 00:22:43.446 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:43.446 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:43.446 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1586730' 00:22:43.446 killing process with pid 1586730 00:22:43.446 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1586730 00:22:43.446 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.446 00:22:43.446 Latency(us) 00:22:43.446 [2024-12-09T04:15:57.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.446 [2024-12-09T04:15:57.443Z] =================================================================================================================== 00:22:43.446 [2024-12-09T04:15:57.443Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.446 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1586730 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AU58FkesBU 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.AU58FkesBU 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AU58FkesBU 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AU58FkesBU 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1587072 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1587072 /var/tmp/bdevperf.sock 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1587072 ']' 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.706 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.966 [2024-12-09 05:15:57.741239] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:43.966 [2024-12-09 05:15:57.741344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587072 ] 00:22:43.966 [2024-12-09 05:15:57.865476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.966 [2024-12-09 05:15:57.939920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.538 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.538 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:44.538 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AU58FkesBU 00:22:44.797 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:45.057 [2024-12-09 05:15:58.858270] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.057 [2024-12-09 05:15:58.865237] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:45.057 [2024-12-09 05:15:58.865266] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:45.057 [2024-12-09 05:15:58.865295] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:45.057 [2024-12-09 05:15:58.865606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (107): Transport endpoint is not connected 00:22:45.057 [2024-12-09 05:15:58.866589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:22:45.057 [2024-12-09 05:15:58.867591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:45.057 [2024-12-09 05:15:58.867614] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:45.057 [2024-12-09 05:15:58.867628] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:45.057 [2024-12-09 05:15:58.867639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
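The errors above show the other way a TLS attach can fail: the key material is fine, but the target derives the PSK identity as `NVMe0R01 <hostnqn> <subnqn>` and finds no entry for host2, since only host1 was registered against cnode1. A sketch of the registration that would make this identity resolvable, reusing the rpc.py call pattern from the passing case earlier in this log (paths abbreviated; `key0` is the keyring name already loaded on the target):

# sketch: register host2 on the subsystem so the PSK identity lookup succeeds
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk key0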
00:22:45.057 request: 00:22:45.057 { 00:22:45.057 "name": "TLSTEST", 00:22:45.057 "trtype": "tcp", 00:22:45.057 "traddr": "10.0.0.2", 00:22:45.057 "adrfam": "ipv4", 00:22:45.057 "trsvcid": "4420", 00:22:45.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.057 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:45.057 "prchk_reftag": false, 00:22:45.057 "prchk_guard": false, 00:22:45.057 "hdgst": false, 00:22:45.057 "ddgst": false, 00:22:45.057 "psk": "key0", 00:22:45.057 "allow_unrecognized_csi": false, 00:22:45.057 "method": "bdev_nvme_attach_controller", 00:22:45.057 "req_id": 1 00:22:45.057 } 00:22:45.058 Got JSON-RPC error response 00:22:45.058 response: 00:22:45.058 { 00:22:45.058 "code": -5, 00:22:45.058 "message": "Input/output error" 00:22:45.058 } 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1587072 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1587072 ']' 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1587072 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1587072 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1587072' 00:22:45.058 killing process with pid 1587072 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1587072 00:22:45.058 Received shutdown signal, test time was about 10.000000 seconds 00:22:45.058 00:22:45.058 Latency(us) 00:22:45.058 [2024-12-09T04:15:59.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.058 [2024-12-09T04:15:59.055Z] =================================================================================================================== 00:22:45.058 [2024-12-09T04:15:59.055Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:45.058 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1587072 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AU58FkesBU 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.AU58FkesBU 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AU58FkesBU 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AU58FkesBU 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1587419 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1587419 /var/tmp/bdevperf.sock 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1587419 ']' 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.628 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.628 [2024-12-09 05:15:59.490369] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:45.628 [2024-12-09 05:15:59.490476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587419 ] 00:22:45.889 [2024-12-09 05:15:59.622982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.889 [2024-12-09 05:15:59.696519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.460 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.460 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:46.460 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AU58FkesBU 00:22:46.460 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:46.720 [2024-12-09 05:16:00.594608] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.720 [2024-12-09 05:16:00.607483] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:46.720 [2024-12-09 05:16:00.607509] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:46.721 [2024-12-09 05:16:00.607535] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:46.721 [2024-12-09 05:16:00.607859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (107): Transport endpoint is not connected 00:22:46.721 [2024-12-09 05:16:00.608841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:22:46.721 [2024-12-09 05:16:00.609842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:46.721 [2024-12-09 05:16:00.609857] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:46.721 [2024-12-09 05:16:00.609869] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:46.721 [2024-12-09 05:16:00.609880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
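cnode2 fails for the symmetric reason: the subsystem half of the PSK identity has nothing registered against it. When triaging identity-lookup failures like these two, it can help to dump what the target actually knows; a sketch using stock rpc.py introspection calls (socket path abbreviated as in the notes above):

# sketch: inspect loaded keys and each subsystem's allowed hosts
scripts/rpc.py keyring_get_keys
scripts/rpc.py nvmf_get_subsystems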
00:22:46.721 request: 00:22:46.721 { 00:22:46.721 "name": "TLSTEST", 00:22:46.721 "trtype": "tcp", 00:22:46.721 "traddr": "10.0.0.2", 00:22:46.721 "adrfam": "ipv4", 00:22:46.721 "trsvcid": "4420", 00:22:46.721 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:46.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.721 "prchk_reftag": false, 00:22:46.721 "prchk_guard": false, 00:22:46.721 "hdgst": false, 00:22:46.721 "ddgst": false, 00:22:46.721 "psk": "key0", 00:22:46.721 "allow_unrecognized_csi": false, 00:22:46.721 "method": "bdev_nvme_attach_controller", 00:22:46.721 "req_id": 1 00:22:46.721 } 00:22:46.721 Got JSON-RPC error response 00:22:46.721 response: 00:22:46.721 { 00:22:46.721 "code": -5, 00:22:46.721 "message": "Input/output error" 00:22:46.721 } 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1587419 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1587419 ']' 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1587419 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1587419 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1587419' 00:22:46.721 killing process with pid 1587419 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1587419 00:22:46.721 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.721 00:22:46.721 Latency(us) 00:22:46.721 [2024-12-09T04:16:00.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.721 [2024-12-09T04:16:00.718Z] =================================================================================================================== 00:22:46.721 [2024-12-09T04:16:00.718Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:46.721 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1587419 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:47.292 
05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1587769 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1587769 /var/tmp/bdevperf.sock 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1587769 ']' 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.292 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.292 [2024-12-09 05:16:01.237143] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:47.292 [2024-12-09 05:16:01.237246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587769 ] 00:22:47.553 [2024-12-09 05:16:01.369768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.553 [2024-12-09 05:16:01.444336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.124 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.124 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:48.124 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:48.385 [2024-12-09 05:16:02.157971] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:48.385 [2024-12-09 05:16:02.158009] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:48.385 request: 00:22:48.385 { 00:22:48.385 "name": "key0", 00:22:48.385 "path": "", 00:22:48.385 "method": "keyring_file_add_key", 00:22:48.385 "req_id": 1 00:22:48.385 } 00:22:48.385 Got JSON-RPC error response 00:22:48.385 response: 00:22:48.385 { 00:22:48.385 "code": -1, 00:22:48.385 "message": "Operation not permitted" 00:22:48.385 } 00:22:48.385 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:48.385 [2024-12-09 05:16:02.342524] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.385 [2024-12-09 05:16:02.342566] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:48.385 request: 00:22:48.385 { 00:22:48.385 "name": "TLSTEST", 00:22:48.385 "trtype": "tcp", 00:22:48.385 "traddr": "10.0.0.2", 00:22:48.385 "adrfam": "ipv4", 00:22:48.385 "trsvcid": "4420", 00:22:48.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.385 "prchk_reftag": false, 00:22:48.385 "prchk_guard": false, 00:22:48.385 "hdgst": false, 00:22:48.385 "ddgst": false, 00:22:48.385 "psk": "key0", 00:22:48.385 "allow_unrecognized_csi": false, 00:22:48.385 "method": "bdev_nvme_attach_controller", 00:22:48.385 "req_id": 1 00:22:48.385 } 00:22:48.385 Got JSON-RPC error response 00:22:48.385 response: 00:22:48.385 { 00:22:48.385 "code": -126, 00:22:48.385 "message": "Required key not available" 00:22:48.385 } 00:22:48.385 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1587769 00:22:48.385 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1587769 ']' 00:22:48.385 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1587769 00:22:48.385 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:48.385 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.385 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1587769 00:22:48.646 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:48.646 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:48.646 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1587769' 00:22:48.646 killing process with pid 1587769 00:22:48.646 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1587769 00:22:48.646 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.646 00:22:48.646 Latency(us) 00:22:48.646 [2024-12-09T04:16:02.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.646 [2024-12-09T04:16:02.643Z] =================================================================================================================== 00:22:48.646 [2024-12-09T04:16:02.643Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.646 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1587769 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1581641 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1581641 ']' 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1581641 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.907 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1581641 00:22:49.169 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:49.169 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:49.169 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1581641' 00:22:49.169 killing process with pid 1581641 00:22:49.169 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1581641 00:22:49.169 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1581641 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:49.740 05:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.o7vytNf5Us 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.o7vytNf5Us 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1588418 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1588418 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1588418 ']' 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.740 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.000 [2024-12-09 05:16:03.754250] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:22:50.000 [2024-12-09 05:16:03.754382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.000 [2024-12-09 05:16:03.907626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.000 [2024-12-09 05:16:03.987266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.000 [2024-12-09 05:16:03.987306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:50.000 [2024-12-09 05:16:03.987314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.000 [2024-12-09 05:16:03.987323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.000 [2024-12-09 05:16:03.987331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.000 [2024-12-09 05:16:03.988260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.569 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.569 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:50.569 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:50.569 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.569 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.569 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.569 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.o7vytNf5Us 00:22:50.569 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o7vytNf5Us 00:22:50.569 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.828 [2024-12-09 05:16:04.711149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.828 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:51.089 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:51.089 [2024-12-09 05:16:05.072073] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:51.089 [2024-12-09 05:16:05.072309] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.349 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:51.349 malloc0 00:22:51.349 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:51.608 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o7vytNf5Us 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o7vytNf5Us 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1588811 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1588811 /var/tmp/bdevperf.sock 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1588811 ']' 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.868 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.127 [2024-12-09 05:16:05.920409] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:52.127 [2024-12-09 05:16:05.920518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588811 ] 00:22:52.127 [2024-12-09 05:16:06.065554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.387 [2024-12-09 05:16:06.161592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.956 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.956 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:52.956 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us 00:22:52.956 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.215 [2024-12-09 05:16:07.023130] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.215 TLSTESTn1 00:22:53.215 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:53.474 Running I/O for 10 seconds... 00:22:55.349 3711.00 IOPS, 14.50 MiB/s [2024-12-09T04:16:10.291Z] 3691.50 IOPS, 14.42 MiB/s [2024-12-09T04:16:11.673Z] 3785.33 IOPS, 14.79 MiB/s [2024-12-09T04:16:12.245Z] 3770.75 IOPS, 14.73 MiB/s [2024-12-09T04:16:13.627Z] 3648.20 IOPS, 14.25 MiB/s [2024-12-09T04:16:14.568Z] 3450.00 IOPS, 13.48 MiB/s [2024-12-09T04:16:15.507Z] 3382.29 IOPS, 13.21 MiB/s [2024-12-09T04:16:16.448Z] 3501.88 IOPS, 13.68 MiB/s [2024-12-09T04:16:17.392Z] 3507.22 IOPS, 13.70 MiB/s [2024-12-09T04:16:17.392Z] 3455.30 IOPS, 13.50 MiB/s 00:23:03.395 Latency(us) 00:23:03.395 [2024-12-09T04:16:17.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.395 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.395 Verification LBA range: start 0x0 length 0x2000 00:23:03.395 TLSTESTn1 : 10.09 3436.18 13.42 0.00 0.00 37097.25 6908.59 161655.47 00:23:03.395 [2024-12-09T04:16:17.392Z] =================================================================================================================== 00:23:03.395 [2024-12-09T04:16:17.392Z] Total : 3436.18 13.42 0.00 0.00 37097.25 6908.59 161655.47 00:23:03.395 { 00:23:03.395 "results": [ 00:23:03.395 { 00:23:03.395 "job": "TLSTESTn1", 00:23:03.395 "core_mask": "0x4", 00:23:03.395 "workload": "verify", 00:23:03.395 "status": "finished", 00:23:03.395 "verify_range": { 00:23:03.395 "start": 0, 00:23:03.395 "length": 8192 00:23:03.395 }, 00:23:03.395 "queue_depth": 128, 00:23:03.395 "io_size": 4096, 00:23:03.395 "runtime": 10.092893, 00:23:03.395 "iops": 3436.180290428126, 00:23:03.395 "mibps": 13.422579259484868, 00:23:03.395 "io_failed": 0, 00:23:03.395 "io_timeout": 0, 00:23:03.395 "avg_latency_us": 37097.25228530511, 00:23:03.395 "min_latency_us": 6908.586666666667, 00:23:03.395 "max_latency_us": 161655.46666666667 00:23:03.395 } 00:23:03.395 ], 00:23:03.395 
"core_count": 1 00:23:03.395 } 00:23:03.395 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.395 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1588811 00:23:03.395 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1588811 ']' 00:23:03.395 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1588811 00:23:03.395 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:03.395 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.395 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1588811 00:23:03.656 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:03.656 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:03.656 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1588811' 00:23:03.656 killing process with pid 1588811 00:23:03.656 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1588811 00:23:03.656 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.656 00:23:03.656 Latency(us) 00:23:03.656 [2024-12-09T04:16:17.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.656 [2024-12-09T04:16:17.654Z] =================================================================================================================== 00:23:03.657 [2024-12-09T04:16:17.654Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.657 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1588811 00:23:03.918 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.o7vytNf5Us 00:23:03.918 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o7vytNf5Us 00:23:03.918 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:03.918 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o7vytNf5Us 00:23:03.918 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:03.918 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.918 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:03.918 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.918 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o7vytNf5Us 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o7vytNf5Us 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1591172 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1591172 /var/tmp/bdevperf.sock 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1591172 ']' 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.179 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.179 [2024-12-09 05:16:17.993935] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:04.179 [2024-12-09 05:16:17.994044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591172 ] 00:23:04.179 [2024-12-09 05:16:18.125727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.440 [2024-12-09 05:16:18.198578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.012 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.012 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:05.012 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us 00:23:05.012 [2024-12-09 05:16:18.919985] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.o7vytNf5Us': 0100666 00:23:05.012 [2024-12-09 05:16:18.920020] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:05.012 request: 00:23:05.012 { 00:23:05.012 "name": "key0", 00:23:05.012 "path": "/tmp/tmp.o7vytNf5Us", 00:23:05.012 "method": "keyring_file_add_key", 00:23:05.013 "req_id": 1 00:23:05.013 } 00:23:05.013 Got JSON-RPC error response 00:23:05.013 response: 00:23:05.013 { 00:23:05.013 "code": -1, 00:23:05.013 "message": "Operation not permitted" 00:23:05.013 } 00:23:05.013 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:05.274 [2024-12-09 05:16:19.104525] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.274 [2024-12-09 05:16:19.104568] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:05.274 request: 00:23:05.274 { 00:23:05.274 "name": "TLSTEST", 00:23:05.274 "trtype": "tcp", 00:23:05.274 "traddr": "10.0.0.2", 00:23:05.274 "adrfam": "ipv4", 00:23:05.274 "trsvcid": "4420", 00:23:05.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.274 "prchk_reftag": false, 00:23:05.274 "prchk_guard": false, 00:23:05.274 "hdgst": false, 00:23:05.274 "ddgst": false, 00:23:05.274 "psk": "key0", 00:23:05.274 "allow_unrecognized_csi": false, 00:23:05.274 "method": "bdev_nvme_attach_controller", 00:23:05.274 "req_id": 1 00:23:05.274 } 00:23:05.274 Got JSON-RPC error response 00:23:05.274 response: 00:23:05.274 { 00:23:05.274 "code": -126, 00:23:05.274 "message": "Required key not available" 00:23:05.274 } 00:23:05.274 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1591172 00:23:05.274 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1591172 ']' 00:23:05.274 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1591172 00:23:05.274 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.274 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.274 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1591172 00:23:05.274 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:05.275 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:05.275 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1591172' 00:23:05.275 killing process with pid 1591172 00:23:05.275 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1591172 00:23:05.275 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.275 00:23:05.275 Latency(us) 00:23:05.275 [2024-12-09T04:16:19.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.275 [2024-12-09T04:16:19.272Z] =================================================================================================================== 00:23:05.275 [2024-12-09T04:16:19.272Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.275 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1591172 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1588418 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1588418 ']' 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1588418 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1588418 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1588418' 00:23:05.847 killing process with pid 1588418 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1588418 00:23:05.847 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1588418 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1591529 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1591529 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1591529 ']' 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.420 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.681 [2024-12-09 05:16:20.436334] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:06.681 [2024-12-09 05:16:20.436440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.681 [2024-12-09 05:16:20.583258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.681 [2024-12-09 05:16:20.663690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.681 [2024-12-09 05:16:20.663731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.681 [2024-12-09 05:16:20.663740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.681 [2024-12-09 05:16:20.663751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.681 [2024-12-09 05:16:20.663760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
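The -1 / "Operation not permitted" failure above is the point of this negative test: keyring_file_check_path rejects a PSK file whose mode grants group or other access (the log reports the offending mode as 0100666), and with key0 never added, the subsequent bdev_nvme_attach_controller fails with -126 "Required key not available". The remedy, which tls.sh applies further down, is a sketch of one line:

    # restore owner-only access; any group/other bits cause keyring_file_add_key to fail
    chmod 0600 /tmp/tmp.o7vytNf5Us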
00:23:06.681 [2024-12-09 05:16:20.664709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.o7vytNf5Us 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.o7vytNf5Us 00:23:07.254 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:07.514 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.514 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:07.514 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.515 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.o7vytNf5Us 00:23:07.515 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o7vytNf5Us 00:23:07.515 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.515 [2024-12-09 05:16:21.404396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.515 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:07.775 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:07.775 [2024-12-09 05:16:21.741256] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.775 [2024-12-09 05:16:21.741518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.775 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:08.036 malloc0 00:23:08.036 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:08.297 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us 00:23:08.297 [2024-12-09 
05:16:22.234439] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.o7vytNf5Us': 0100666 00:23:08.297 [2024-12-09 05:16:22.234471] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:08.297 request: 00:23:08.297 { 00:23:08.297 "name": "key0", 00:23:08.297 "path": "/tmp/tmp.o7vytNf5Us", 00:23:08.297 "method": "keyring_file_add_key", 00:23:08.297 "req_id": 1 00:23:08.297 } 00:23:08.297 Got JSON-RPC error response 00:23:08.297 response: 00:23:08.297 { 00:23:08.297 "code": -1, 00:23:08.297 "message": "Operation not permitted" 00:23:08.297 } 00:23:08.297 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.558 [2024-12-09 05:16:22.402892] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:08.558 [2024-12-09 05:16:22.402935] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:08.558 request: 00:23:08.558 { 00:23:08.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.558 "host": "nqn.2016-06.io.spdk:host1", 00:23:08.558 "psk": "key0", 00:23:08.558 "method": "nvmf_subsystem_add_host", 00:23:08.558 "req_id": 1 00:23:08.558 } 00:23:08.558 Got JSON-RPC error response 00:23:08.558 response: 00:23:08.558 { 00:23:08.558 "code": -32603, 00:23:08.558 "message": "Internal error" 00:23:08.558 } 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1591529 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1591529 ']' 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1591529 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1591529 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1591529' 00:23:08.558 killing process with pid 1591529 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1591529 00:23:08.558 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1591529 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.o7vytNf5Us 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:09.130 05:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1592225 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1592225 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1592225 ']' 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.130 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.390 [2024-12-09 05:16:23.205345] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:09.390 [2024-12-09 05:16:23.205454] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.390 [2024-12-09 05:16:23.352210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.650 [2024-12-09 05:16:23.425187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.650 [2024-12-09 05:16:23.425224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.650 [2024-12-09 05:16:23.425233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.650 [2024-12-09 05:16:23.425241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.651 [2024-12-09 05:16:23.425249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
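For reference, the target-side sequence that setup_nvmf_tgt traces above, and which succeeds once the key file is back at 0600, is the following (values taken from this run; the workspace prefix is again shortened):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k creates the listener with TLS (secure channel) enabled
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us
    # the PSK is bound to the host NQN, not the listener, which is why the earlier
    # nvmf_subsystem_add_host attempt failed with -32603 once key0 was missing
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0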
00:23:09.651 [2024-12-09 05:16:23.426179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.222 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.222 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:10.222 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.222 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.222 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.222 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.222 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.o7vytNf5Us 00:23:10.222 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o7vytNf5Us 00:23:10.222 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.222 [2024-12-09 05:16:24.148060] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.222 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.483 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:10.483 [2024-12-09 05:16:24.464836] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.483 [2024-12-09 05:16:24.465082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.744 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.744 malloc0 00:23:10.744 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.004 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us 00:23:11.004 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.269 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.269 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1592591 00:23:11.269 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.269 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1592591 /var/tmp/bdevperf.sock 00:23:11.269 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1592591 ']' 00:23:11.270 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.270 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.270 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.270 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.270 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.270 [2024-12-09 05:16:25.168336] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:11.270 [2024-12-09 05:16:25.168441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592591 ] 00:23:11.530 [2024-12-09 05:16:25.299436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.530 [2024-12-09 05:16:25.372708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.100 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.100 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.100 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us 00:23:12.360 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.360 [2024-12-09 05:16:26.262132] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.360 TLSTESTn1 00:23:12.620 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:12.880 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:12.880 "subsystems": [ 00:23:12.880 { 00:23:12.880 "subsystem": "keyring", 00:23:12.880 "config": [ 00:23:12.880 { 00:23:12.880 "method": "keyring_file_add_key", 00:23:12.880 "params": { 00:23:12.880 "name": "key0", 00:23:12.880 "path": "/tmp/tmp.o7vytNf5Us" 00:23:12.880 } 00:23:12.880 } 00:23:12.880 ] 00:23:12.880 }, 00:23:12.880 { 00:23:12.880 "subsystem": "iobuf", 00:23:12.880 "config": [ 00:23:12.880 { 00:23:12.880 "method": "iobuf_set_options", 00:23:12.880 "params": { 00:23:12.880 "small_pool_count": 8192, 00:23:12.880 "large_pool_count": 1024, 00:23:12.880 "small_bufsize": 8192, 00:23:12.880 "large_bufsize": 135168, 00:23:12.880 "enable_numa": false 00:23:12.880 } 00:23:12.880 } 00:23:12.880 ] 00:23:12.880 }, 00:23:12.880 { 00:23:12.880 "subsystem": "sock", 00:23:12.880 "config": [ 00:23:12.880 { 00:23:12.880 "method": "sock_set_default_impl", 00:23:12.880 "params": { 00:23:12.880 "impl_name": "posix" 
00:23:12.880 } 00:23:12.880 }, 00:23:12.880 { 00:23:12.880 "method": "sock_impl_set_options", 00:23:12.880 "params": { 00:23:12.880 "impl_name": "ssl", 00:23:12.880 "recv_buf_size": 4096, 00:23:12.880 "send_buf_size": 4096, 00:23:12.880 "enable_recv_pipe": true, 00:23:12.880 "enable_quickack": false, 00:23:12.880 "enable_placement_id": 0, 00:23:12.880 "enable_zerocopy_send_server": true, 00:23:12.880 "enable_zerocopy_send_client": false, 00:23:12.880 "zerocopy_threshold": 0, 00:23:12.880 "tls_version": 0, 00:23:12.880 "enable_ktls": false 00:23:12.880 } 00:23:12.880 }, 00:23:12.880 { 00:23:12.880 "method": "sock_impl_set_options", 00:23:12.880 "params": { 00:23:12.880 "impl_name": "posix", 00:23:12.880 "recv_buf_size": 2097152, 00:23:12.880 "send_buf_size": 2097152, 00:23:12.880 "enable_recv_pipe": true, 00:23:12.880 "enable_quickack": false, 00:23:12.880 "enable_placement_id": 0, 00:23:12.880 "enable_zerocopy_send_server": true, 00:23:12.880 "enable_zerocopy_send_client": false, 00:23:12.880 "zerocopy_threshold": 0, 00:23:12.880 "tls_version": 0, 00:23:12.880 "enable_ktls": false 00:23:12.880 } 00:23:12.880 } 00:23:12.880 ] 00:23:12.880 }, 00:23:12.880 { 00:23:12.880 "subsystem": "vmd", 00:23:12.880 "config": [] 00:23:12.880 }, 00:23:12.880 { 00:23:12.880 "subsystem": "accel", 00:23:12.880 "config": [ 00:23:12.881 { 00:23:12.881 "method": "accel_set_options", 00:23:12.881 "params": { 00:23:12.881 "small_cache_size": 128, 00:23:12.881 "large_cache_size": 16, 00:23:12.881 "task_count": 2048, 00:23:12.881 "sequence_count": 2048, 00:23:12.881 "buf_count": 2048 00:23:12.881 } 00:23:12.881 } 00:23:12.881 ] 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "subsystem": "bdev", 00:23:12.881 "config": [ 00:23:12.881 { 00:23:12.881 "method": "bdev_set_options", 00:23:12.881 "params": { 00:23:12.881 "bdev_io_pool_size": 65535, 00:23:12.881 "bdev_io_cache_size": 256, 00:23:12.881 "bdev_auto_examine": true, 00:23:12.881 "iobuf_small_cache_size": 128, 00:23:12.881 "iobuf_large_cache_size": 16 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "bdev_raid_set_options", 00:23:12.881 "params": { 00:23:12.881 "process_window_size_kb": 1024, 00:23:12.881 "process_max_bandwidth_mb_sec": 0 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "bdev_iscsi_set_options", 00:23:12.881 "params": { 00:23:12.881 "timeout_sec": 30 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "bdev_nvme_set_options", 00:23:12.881 "params": { 00:23:12.881 "action_on_timeout": "none", 00:23:12.881 "timeout_us": 0, 00:23:12.881 "timeout_admin_us": 0, 00:23:12.881 "keep_alive_timeout_ms": 10000, 00:23:12.881 "arbitration_burst": 0, 00:23:12.881 "low_priority_weight": 0, 00:23:12.881 "medium_priority_weight": 0, 00:23:12.881 "high_priority_weight": 0, 00:23:12.881 "nvme_adminq_poll_period_us": 10000, 00:23:12.881 "nvme_ioq_poll_period_us": 0, 00:23:12.881 "io_queue_requests": 0, 00:23:12.881 "delay_cmd_submit": true, 00:23:12.881 "transport_retry_count": 4, 00:23:12.881 "bdev_retry_count": 3, 00:23:12.881 "transport_ack_timeout": 0, 00:23:12.881 "ctrlr_loss_timeout_sec": 0, 00:23:12.881 "reconnect_delay_sec": 0, 00:23:12.881 "fast_io_fail_timeout_sec": 0, 00:23:12.881 "disable_auto_failback": false, 00:23:12.881 "generate_uuids": false, 00:23:12.881 "transport_tos": 0, 00:23:12.881 "nvme_error_stat": false, 00:23:12.881 "rdma_srq_size": 0, 00:23:12.881 "io_path_stat": false, 00:23:12.881 "allow_accel_sequence": false, 00:23:12.881 "rdma_max_cq_size": 0, 00:23:12.881 
"rdma_cm_event_timeout_ms": 0, 00:23:12.881 "dhchap_digests": [ 00:23:12.881 "sha256", 00:23:12.881 "sha384", 00:23:12.881 "sha512" 00:23:12.881 ], 00:23:12.881 "dhchap_dhgroups": [ 00:23:12.881 "null", 00:23:12.881 "ffdhe2048", 00:23:12.881 "ffdhe3072", 00:23:12.881 "ffdhe4096", 00:23:12.881 "ffdhe6144", 00:23:12.881 "ffdhe8192" 00:23:12.881 ] 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "bdev_nvme_set_hotplug", 00:23:12.881 "params": { 00:23:12.881 "period_us": 100000, 00:23:12.881 "enable": false 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "bdev_malloc_create", 00:23:12.881 "params": { 00:23:12.881 "name": "malloc0", 00:23:12.881 "num_blocks": 8192, 00:23:12.881 "block_size": 4096, 00:23:12.881 "physical_block_size": 4096, 00:23:12.881 "uuid": "601435ad-ebeb-45fc-a475-d5cc2623f228", 00:23:12.881 "optimal_io_boundary": 0, 00:23:12.881 "md_size": 0, 00:23:12.881 "dif_type": 0, 00:23:12.881 "dif_is_head_of_md": false, 00:23:12.881 "dif_pi_format": 0 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "bdev_wait_for_examine" 00:23:12.881 } 00:23:12.881 ] 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "subsystem": "nbd", 00:23:12.881 "config": [] 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "subsystem": "scheduler", 00:23:12.881 "config": [ 00:23:12.881 { 00:23:12.881 "method": "framework_set_scheduler", 00:23:12.881 "params": { 00:23:12.881 "name": "static" 00:23:12.881 } 00:23:12.881 } 00:23:12.881 ] 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "subsystem": "nvmf", 00:23:12.881 "config": [ 00:23:12.881 { 00:23:12.881 "method": "nvmf_set_config", 00:23:12.881 "params": { 00:23:12.881 "discovery_filter": "match_any", 00:23:12.881 "admin_cmd_passthru": { 00:23:12.881 "identify_ctrlr": false 00:23:12.881 }, 00:23:12.881 "dhchap_digests": [ 00:23:12.881 "sha256", 00:23:12.881 "sha384", 00:23:12.881 "sha512" 00:23:12.881 ], 00:23:12.881 "dhchap_dhgroups": [ 00:23:12.881 "null", 00:23:12.881 "ffdhe2048", 00:23:12.881 "ffdhe3072", 00:23:12.881 "ffdhe4096", 00:23:12.881 "ffdhe6144", 00:23:12.881 "ffdhe8192" 00:23:12.881 ] 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "nvmf_set_max_subsystems", 00:23:12.881 "params": { 00:23:12.881 "max_subsystems": 1024 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "nvmf_set_crdt", 00:23:12.881 "params": { 00:23:12.881 "crdt1": 0, 00:23:12.881 "crdt2": 0, 00:23:12.881 "crdt3": 0 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "nvmf_create_transport", 00:23:12.881 "params": { 00:23:12.881 "trtype": "TCP", 00:23:12.881 "max_queue_depth": 128, 00:23:12.881 "max_io_qpairs_per_ctrlr": 127, 00:23:12.881 "in_capsule_data_size": 4096, 00:23:12.881 "max_io_size": 131072, 00:23:12.881 "io_unit_size": 131072, 00:23:12.881 "max_aq_depth": 128, 00:23:12.881 "num_shared_buffers": 511, 00:23:12.881 "buf_cache_size": 4294967295, 00:23:12.881 "dif_insert_or_strip": false, 00:23:12.881 "zcopy": false, 00:23:12.881 "c2h_success": false, 00:23:12.881 "sock_priority": 0, 00:23:12.881 "abort_timeout_sec": 1, 00:23:12.881 "ack_timeout": 0, 00:23:12.881 "data_wr_pool_size": 0 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "nvmf_create_subsystem", 00:23:12.881 "params": { 00:23:12.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.881 "allow_any_host": false, 00:23:12.881 "serial_number": "SPDK00000000000001", 00:23:12.881 "model_number": "SPDK bdev Controller", 00:23:12.881 "max_namespaces": 10, 00:23:12.881 "min_cntlid": 1, 00:23:12.881 
"max_cntlid": 65519, 00:23:12.881 "ana_reporting": false 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "nvmf_subsystem_add_host", 00:23:12.881 "params": { 00:23:12.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.881 "host": "nqn.2016-06.io.spdk:host1", 00:23:12.881 "psk": "key0" 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "nvmf_subsystem_add_ns", 00:23:12.881 "params": { 00:23:12.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.881 "namespace": { 00:23:12.881 "nsid": 1, 00:23:12.881 "bdev_name": "malloc0", 00:23:12.881 "nguid": "601435ADEBEB45FCA475D5CC2623F228", 00:23:12.881 "uuid": "601435ad-ebeb-45fc-a475-d5cc2623f228", 00:23:12.881 "no_auto_visible": false 00:23:12.881 } 00:23:12.881 } 00:23:12.881 }, 00:23:12.881 { 00:23:12.881 "method": "nvmf_subsystem_add_listener", 00:23:12.881 "params": { 00:23:12.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.881 "listen_address": { 00:23:12.881 "trtype": "TCP", 00:23:12.881 "adrfam": "IPv4", 00:23:12.881 "traddr": "10.0.0.2", 00:23:12.881 "trsvcid": "4420" 00:23:12.881 }, 00:23:12.881 "secure_channel": true 00:23:12.881 } 00:23:12.881 } 00:23:12.881 ] 00:23:12.881 } 00:23:12.881 ] 00:23:12.881 }' 00:23:12.881 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:13.142 "subsystems": [ 00:23:13.142 { 00:23:13.142 "subsystem": "keyring", 00:23:13.142 "config": [ 00:23:13.142 { 00:23:13.142 "method": "keyring_file_add_key", 00:23:13.142 "params": { 00:23:13.142 "name": "key0", 00:23:13.142 "path": "/tmp/tmp.o7vytNf5Us" 00:23:13.142 } 00:23:13.142 } 00:23:13.142 ] 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "subsystem": "iobuf", 00:23:13.142 "config": [ 00:23:13.142 { 00:23:13.142 "method": "iobuf_set_options", 00:23:13.142 "params": { 00:23:13.142 "small_pool_count": 8192, 00:23:13.142 "large_pool_count": 1024, 00:23:13.142 "small_bufsize": 8192, 00:23:13.142 "large_bufsize": 135168, 00:23:13.142 "enable_numa": false 00:23:13.142 } 00:23:13.142 } 00:23:13.142 ] 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "subsystem": "sock", 00:23:13.142 "config": [ 00:23:13.142 { 00:23:13.142 "method": "sock_set_default_impl", 00:23:13.142 "params": { 00:23:13.142 "impl_name": "posix" 00:23:13.142 } 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "method": "sock_impl_set_options", 00:23:13.142 "params": { 00:23:13.142 "impl_name": "ssl", 00:23:13.142 "recv_buf_size": 4096, 00:23:13.142 "send_buf_size": 4096, 00:23:13.142 "enable_recv_pipe": true, 00:23:13.142 "enable_quickack": false, 00:23:13.142 "enable_placement_id": 0, 00:23:13.142 "enable_zerocopy_send_server": true, 00:23:13.142 "enable_zerocopy_send_client": false, 00:23:13.142 "zerocopy_threshold": 0, 00:23:13.142 "tls_version": 0, 00:23:13.142 "enable_ktls": false 00:23:13.142 } 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "method": "sock_impl_set_options", 00:23:13.142 "params": { 00:23:13.142 "impl_name": "posix", 00:23:13.142 "recv_buf_size": 2097152, 00:23:13.142 "send_buf_size": 2097152, 00:23:13.142 "enable_recv_pipe": true, 00:23:13.142 "enable_quickack": false, 00:23:13.142 "enable_placement_id": 0, 00:23:13.142 "enable_zerocopy_send_server": true, 00:23:13.142 "enable_zerocopy_send_client": false, 00:23:13.142 "zerocopy_threshold": 0, 00:23:13.142 "tls_version": 0, 00:23:13.142 "enable_ktls": false 00:23:13.142 } 00:23:13.142 
} 00:23:13.142 ] 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "subsystem": "vmd", 00:23:13.142 "config": [] 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "subsystem": "accel", 00:23:13.142 "config": [ 00:23:13.142 { 00:23:13.142 "method": "accel_set_options", 00:23:13.142 "params": { 00:23:13.142 "small_cache_size": 128, 00:23:13.142 "large_cache_size": 16, 00:23:13.142 "task_count": 2048, 00:23:13.142 "sequence_count": 2048, 00:23:13.142 "buf_count": 2048 00:23:13.142 } 00:23:13.142 } 00:23:13.142 ] 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "subsystem": "bdev", 00:23:13.142 "config": [ 00:23:13.142 { 00:23:13.142 "method": "bdev_set_options", 00:23:13.142 "params": { 00:23:13.142 "bdev_io_pool_size": 65535, 00:23:13.142 "bdev_io_cache_size": 256, 00:23:13.142 "bdev_auto_examine": true, 00:23:13.142 "iobuf_small_cache_size": 128, 00:23:13.142 "iobuf_large_cache_size": 16 00:23:13.142 } 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "method": "bdev_raid_set_options", 00:23:13.142 "params": { 00:23:13.142 "process_window_size_kb": 1024, 00:23:13.142 "process_max_bandwidth_mb_sec": 0 00:23:13.142 } 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "method": "bdev_iscsi_set_options", 00:23:13.142 "params": { 00:23:13.142 "timeout_sec": 30 00:23:13.142 } 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "method": "bdev_nvme_set_options", 00:23:13.142 "params": { 00:23:13.142 "action_on_timeout": "none", 00:23:13.142 "timeout_us": 0, 00:23:13.142 "timeout_admin_us": 0, 00:23:13.142 "keep_alive_timeout_ms": 10000, 00:23:13.142 "arbitration_burst": 0, 00:23:13.142 "low_priority_weight": 0, 00:23:13.142 "medium_priority_weight": 0, 00:23:13.142 "high_priority_weight": 0, 00:23:13.142 "nvme_adminq_poll_period_us": 10000, 00:23:13.142 "nvme_ioq_poll_period_us": 0, 00:23:13.142 "io_queue_requests": 512, 00:23:13.142 "delay_cmd_submit": true, 00:23:13.142 "transport_retry_count": 4, 00:23:13.142 "bdev_retry_count": 3, 00:23:13.142 "transport_ack_timeout": 0, 00:23:13.142 "ctrlr_loss_timeout_sec": 0, 00:23:13.142 "reconnect_delay_sec": 0, 00:23:13.142 "fast_io_fail_timeout_sec": 0, 00:23:13.142 "disable_auto_failback": false, 00:23:13.142 "generate_uuids": false, 00:23:13.142 "transport_tos": 0, 00:23:13.142 "nvme_error_stat": false, 00:23:13.142 "rdma_srq_size": 0, 00:23:13.142 "io_path_stat": false, 00:23:13.142 "allow_accel_sequence": false, 00:23:13.142 "rdma_max_cq_size": 0, 00:23:13.142 "rdma_cm_event_timeout_ms": 0, 00:23:13.142 "dhchap_digests": [ 00:23:13.142 "sha256", 00:23:13.142 "sha384", 00:23:13.142 "sha512" 00:23:13.142 ], 00:23:13.142 "dhchap_dhgroups": [ 00:23:13.142 "null", 00:23:13.142 "ffdhe2048", 00:23:13.142 "ffdhe3072", 00:23:13.142 "ffdhe4096", 00:23:13.142 "ffdhe6144", 00:23:13.142 "ffdhe8192" 00:23:13.142 ] 00:23:13.142 } 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "method": "bdev_nvme_attach_controller", 00:23:13.142 "params": { 00:23:13.142 "name": "TLSTEST", 00:23:13.142 "trtype": "TCP", 00:23:13.142 "adrfam": "IPv4", 00:23:13.142 "traddr": "10.0.0.2", 00:23:13.142 "trsvcid": "4420", 00:23:13.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.142 "prchk_reftag": false, 00:23:13.142 "prchk_guard": false, 00:23:13.142 "ctrlr_loss_timeout_sec": 0, 00:23:13.142 "reconnect_delay_sec": 0, 00:23:13.142 "fast_io_fail_timeout_sec": 0, 00:23:13.142 "psk": "key0", 00:23:13.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.142 "hdgst": false, 00:23:13.142 "ddgst": false, 00:23:13.142 "multipath": "multipath" 00:23:13.142 } 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "method": 
"bdev_nvme_set_hotplug", 00:23:13.142 "params": { 00:23:13.142 "period_us": 100000, 00:23:13.142 "enable": false 00:23:13.142 } 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "method": "bdev_wait_for_examine" 00:23:13.142 } 00:23:13.142 ] 00:23:13.142 }, 00:23:13.142 { 00:23:13.142 "subsystem": "nbd", 00:23:13.142 "config": [] 00:23:13.142 } 00:23:13.142 ] 00:23:13.142 }' 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1592591 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1592591 ']' 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1592591 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1592591 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1592591' 00:23:13.142 killing process with pid 1592591 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1592591 00:23:13.142 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.142 00:23:13.142 Latency(us) 00:23:13.142 [2024-12-09T04:16:27.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.142 [2024-12-09T04:16:27.139Z] =================================================================================================================== 00:23:13.142 [2024-12-09T04:16:27.139Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.142 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1592591 00:23:13.403 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1592225 00:23:13.403 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1592225 ']' 00:23:13.403 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1592225 00:23:13.403 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.403 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.403 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1592225 00:23:13.664 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:13.664 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:13.664 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1592225' 00:23:13.664 killing process with pid 1592225 00:23:13.664 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1592225 00:23:13.664 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1592225 00:23:14.239 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:14.239 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.239 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.239 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.239 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:14.239 "subsystems": [ 00:23:14.239 { 00:23:14.239 "subsystem": "keyring", 00:23:14.239 "config": [ 00:23:14.239 { 00:23:14.239 "method": "keyring_file_add_key", 00:23:14.239 "params": { 00:23:14.239 "name": "key0", 00:23:14.239 "path": "/tmp/tmp.o7vytNf5Us" 00:23:14.239 } 00:23:14.239 } 00:23:14.239 ] 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "subsystem": "iobuf", 00:23:14.239 "config": [ 00:23:14.239 { 00:23:14.239 "method": "iobuf_set_options", 00:23:14.239 "params": { 00:23:14.239 "small_pool_count": 8192, 00:23:14.239 "large_pool_count": 1024, 00:23:14.239 "small_bufsize": 8192, 00:23:14.239 "large_bufsize": 135168, 00:23:14.239 "enable_numa": false 00:23:14.239 } 00:23:14.239 } 00:23:14.239 ] 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "subsystem": "sock", 00:23:14.239 "config": [ 00:23:14.239 { 00:23:14.239 "method": "sock_set_default_impl", 00:23:14.239 "params": { 00:23:14.239 "impl_name": "posix" 00:23:14.239 } 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "method": "sock_impl_set_options", 00:23:14.239 "params": { 00:23:14.239 "impl_name": "ssl", 00:23:14.239 "recv_buf_size": 4096, 00:23:14.239 "send_buf_size": 4096, 00:23:14.239 "enable_recv_pipe": true, 00:23:14.239 "enable_quickack": false, 00:23:14.239 "enable_placement_id": 0, 00:23:14.239 "enable_zerocopy_send_server": true, 00:23:14.239 "enable_zerocopy_send_client": false, 00:23:14.239 "zerocopy_threshold": 0, 00:23:14.239 "tls_version": 0, 00:23:14.239 "enable_ktls": false 00:23:14.239 } 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "method": "sock_impl_set_options", 00:23:14.239 "params": { 00:23:14.239 "impl_name": "posix", 00:23:14.239 "recv_buf_size": 2097152, 00:23:14.239 "send_buf_size": 2097152, 00:23:14.239 "enable_recv_pipe": true, 00:23:14.239 "enable_quickack": false, 00:23:14.239 "enable_placement_id": 0, 00:23:14.239 "enable_zerocopy_send_server": true, 00:23:14.239 "enable_zerocopy_send_client": false, 00:23:14.239 "zerocopy_threshold": 0, 00:23:14.239 "tls_version": 0, 00:23:14.239 "enable_ktls": false 00:23:14.239 } 00:23:14.239 } 00:23:14.239 ] 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "subsystem": "vmd", 00:23:14.239 "config": [] 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "subsystem": "accel", 00:23:14.239 "config": [ 00:23:14.239 { 00:23:14.239 "method": "accel_set_options", 00:23:14.239 "params": { 00:23:14.239 "small_cache_size": 128, 00:23:14.239 "large_cache_size": 16, 00:23:14.239 "task_count": 2048, 00:23:14.239 "sequence_count": 2048, 00:23:14.239 "buf_count": 2048 00:23:14.239 } 00:23:14.239 } 00:23:14.239 ] 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "subsystem": "bdev", 00:23:14.239 "config": [ 00:23:14.239 { 00:23:14.239 "method": "bdev_set_options", 00:23:14.239 "params": { 00:23:14.239 "bdev_io_pool_size": 65535, 00:23:14.239 "bdev_io_cache_size": 256, 00:23:14.239 "bdev_auto_examine": true, 00:23:14.239 "iobuf_small_cache_size": 128, 00:23:14.239 "iobuf_large_cache_size": 16 00:23:14.239 } 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "method": "bdev_raid_set_options", 00:23:14.239 "params": { 00:23:14.239 
"process_window_size_kb": 1024, 00:23:14.239 "process_max_bandwidth_mb_sec": 0 00:23:14.239 } 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "method": "bdev_iscsi_set_options", 00:23:14.239 "params": { 00:23:14.239 "timeout_sec": 30 00:23:14.239 } 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "method": "bdev_nvme_set_options", 00:23:14.239 "params": { 00:23:14.239 "action_on_timeout": "none", 00:23:14.239 "timeout_us": 0, 00:23:14.239 "timeout_admin_us": 0, 00:23:14.239 "keep_alive_timeout_ms": 10000, 00:23:14.239 "arbitration_burst": 0, 00:23:14.239 "low_priority_weight": 0, 00:23:14.239 "medium_priority_weight": 0, 00:23:14.239 "high_priority_weight": 0, 00:23:14.239 "nvme_adminq_poll_period_us": 10000, 00:23:14.239 "nvme_ioq_poll_period_us": 0, 00:23:14.239 "io_queue_requests": 0, 00:23:14.239 "delay_cmd_submit": true, 00:23:14.239 "transport_retry_count": 4, 00:23:14.239 "bdev_retry_count": 3, 00:23:14.239 "transport_ack_timeout": 0, 00:23:14.239 "ctrlr_loss_timeout_sec": 0, 00:23:14.239 "reconnect_delay_sec": 0, 00:23:14.239 "fast_io_fail_timeout_sec": 0, 00:23:14.239 "disable_auto_failback": false, 00:23:14.239 "generate_uuids": false, 00:23:14.239 "transport_tos": 0, 00:23:14.239 "nvme_error_stat": false, 00:23:14.239 "rdma_srq_size": 0, 00:23:14.239 "io_path_stat": false, 00:23:14.239 "allow_accel_sequence": false, 00:23:14.239 "rdma_max_cq_size": 0, 00:23:14.239 "rdma_cm_event_timeout_ms": 0, 00:23:14.239 "dhchap_digests": [ 00:23:14.239 "sha256", 00:23:14.239 "sha384", 00:23:14.239 "sha512" 00:23:14.239 ], 00:23:14.239 "dhchap_dhgroups": [ 00:23:14.239 "null", 00:23:14.239 "ffdhe2048", 00:23:14.239 "ffdhe3072", 00:23:14.239 "ffdhe4096", 00:23:14.239 "ffdhe6144", 00:23:14.239 "ffdhe8192" 00:23:14.239 ] 00:23:14.239 } 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "method": "bdev_nvme_set_hotplug", 00:23:14.239 "params": { 00:23:14.239 "period_us": 100000, 00:23:14.239 "enable": false 00:23:14.239 } 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "method": "bdev_malloc_create", 00:23:14.239 "params": { 00:23:14.239 "name": "malloc0", 00:23:14.239 "num_blocks": 8192, 00:23:14.239 "block_size": 4096, 00:23:14.239 "physical_block_size": 4096, 00:23:14.239 "uuid": "601435ad-ebeb-45fc-a475-d5cc2623f228", 00:23:14.239 "optimal_io_boundary": 0, 00:23:14.239 "md_size": 0, 00:23:14.239 "dif_type": 0, 00:23:14.239 "dif_is_head_of_md": false, 00:23:14.239 "dif_pi_format": 0 00:23:14.239 } 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "method": "bdev_wait_for_examine" 00:23:14.239 } 00:23:14.239 ] 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "subsystem": "nbd", 00:23:14.239 "config": [] 00:23:14.239 }, 00:23:14.239 { 00:23:14.239 "subsystem": "scheduler", 00:23:14.239 "config": [ 00:23:14.239 { 00:23:14.239 "method": "framework_set_scheduler", 00:23:14.239 "params": { 00:23:14.239 "name": "static" 00:23:14.240 } 00:23:14.240 } 00:23:14.240 ] 00:23:14.240 }, 00:23:14.240 { 00:23:14.240 "subsystem": "nvmf", 00:23:14.240 "config": [ 00:23:14.240 { 00:23:14.240 "method": "nvmf_set_config", 00:23:14.240 "params": { 00:23:14.240 "discovery_filter": "match_any", 00:23:14.240 "admin_cmd_passthru": { 00:23:14.240 "identify_ctrlr": false 00:23:14.240 }, 00:23:14.240 "dhchap_digests": [ 00:23:14.240 "sha256", 00:23:14.240 "sha384", 00:23:14.240 "sha512" 00:23:14.240 ], 00:23:14.240 "dhchap_dhgroups": [ 00:23:14.240 "null", 00:23:14.240 "ffdhe2048", 00:23:14.240 "ffdhe3072", 00:23:14.240 "ffdhe4096", 00:23:14.240 "ffdhe6144", 00:23:14.240 "ffdhe8192" 00:23:14.240 ] 00:23:14.240 } 00:23:14.240 }, 00:23:14.240 { 
00:23:14.240 "method": "nvmf_set_max_subsystems", 00:23:14.240 "params": { 00:23:14.240 "max_subsystems": 1024 00:23:14.240 } 00:23:14.240 }, 00:23:14.240 { 00:23:14.240 "method": "nvmf_set_crdt", 00:23:14.240 "params": { 00:23:14.240 "crdt1": 0, 00:23:14.240 "crdt2": 0, 00:23:14.240 "crdt3": 0 00:23:14.240 } 00:23:14.240 }, 00:23:14.240 { 00:23:14.240 "method": "nvmf_create_transport", 00:23:14.240 "params": { 00:23:14.240 "trtype": "TCP", 00:23:14.240 "max_queue_depth": 128, 00:23:14.240 "max_io_qpairs_per_ctrlr": 127, 00:23:14.240 "in_capsule_data_size": 4096, 00:23:14.240 "max_io_size": 131072, 00:23:14.240 "io_unit_size": 131072, 00:23:14.240 "max_aq_depth": 128, 00:23:14.240 "num_shared_buffers": 511, 00:23:14.240 "buf_cache_size": 4294967295, 00:23:14.240 "dif_insert_or_strip": false, 00:23:14.240 "zcopy": false, 00:23:14.240 "c2h_success": false, 00:23:14.240 "sock_priority": 0, 00:23:14.240 "abort_timeout_sec": 1, 00:23:14.240 "ack_timeout": 0, 00:23:14.240 "data_wr_pool_size": 0 00:23:14.240 } 00:23:14.240 }, 00:23:14.240 { 00:23:14.240 "method": "nvmf_create_subsystem", 00:23:14.240 "params": { 00:23:14.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.240 "allow_any_host": false, 00:23:14.240 "serial_number": "SPDK00000000000001", 00:23:14.240 "model_number": "SPDK bdev Controller", 00:23:14.240 "max_namespaces": 10, 00:23:14.240 "min_cntlid": 1, 00:23:14.240 "max_cntlid": 65519, 00:23:14.240 "ana_reporting": false 00:23:14.240 } 00:23:14.240 }, 00:23:14.240 { 00:23:14.240 "method": "nvmf_subsystem_add_host", 00:23:14.240 "params": { 00:23:14.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.240 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.240 "psk": "key0" 00:23:14.240 } 00:23:14.240 }, 00:23:14.240 { 00:23:14.240 "method": "nvmf_subsystem_add_ns", 00:23:14.240 "params": { 00:23:14.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.240 "namespace": { 00:23:14.240 "nsid": 1, 00:23:14.240 "bdev_name": "malloc0", 00:23:14.240 "nguid": "601435ADEBEB45FCA475D5CC2623F228", 00:23:14.240 "uuid": "601435ad-ebeb-45fc-a475-d5cc2623f228", 00:23:14.240 "no_auto_visible": false 00:23:14.240 } 00:23:14.240 } 00:23:14.240 }, 00:23:14.240 { 00:23:14.240 "method": "nvmf_subsystem_add_listener", 00:23:14.240 "params": { 00:23:14.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.240 "listen_address": { 00:23:14.240 "trtype": "TCP", 00:23:14.240 "adrfam": "IPv4", 00:23:14.240 "traddr": "10.0.0.2", 00:23:14.240 "trsvcid": "4420" 00:23:14.240 }, 00:23:14.240 "secure_channel": true 00:23:14.240 } 00:23:14.240 } 00:23:14.240 ] 00:23:14.240 } 00:23:14.240 ] 00:23:14.240 }' 00:23:14.240 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1593180 00:23:14.240 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1593180 00:23:14.240 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:14.240 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1593180 ']' 00:23:14.240 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.240 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.240 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:23:14.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.240 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.240 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.240 [2024-12-09 05:16:28.161041] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:14.240 [2024-12-09 05:16:28.161155] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.501 [2024-12-09 05:16:28.313213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.501 [2024-12-09 05:16:28.393183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.501 [2024-12-09 05:16:28.393222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.501 [2024-12-09 05:16:28.393231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.501 [2024-12-09 05:16:28.393240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.501 [2024-12-09 05:16:28.393248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.501 [2024-12-09 05:16:28.394246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.761 [2024-12-09 05:16:28.732375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.022 [2024-12-09 05:16:28.764411] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.022 [2024-12-09 05:16:28.764655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1593308 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1593308 /var/tmp/bdevperf.sock 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1593308 ']' 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:15.022 
05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.022 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:15.022 "subsystems": [ 00:23:15.022 { 00:23:15.022 "subsystem": "keyring", 00:23:15.022 "config": [ 00:23:15.022 { 00:23:15.022 "method": "keyring_file_add_key", 00:23:15.022 "params": { 00:23:15.022 "name": "key0", 00:23:15.022 "path": "/tmp/tmp.o7vytNf5Us" 00:23:15.022 } 00:23:15.022 } 00:23:15.022 ] 00:23:15.022 }, 00:23:15.022 { 00:23:15.022 "subsystem": "iobuf", 00:23:15.022 "config": [ 00:23:15.022 { 00:23:15.022 "method": "iobuf_set_options", 00:23:15.022 "params": { 00:23:15.022 "small_pool_count": 8192, 00:23:15.022 "large_pool_count": 1024, 00:23:15.022 "small_bufsize": 8192, 00:23:15.022 "large_bufsize": 135168, 00:23:15.022 "enable_numa": false 00:23:15.022 } 00:23:15.022 } 00:23:15.022 ] 00:23:15.022 }, 00:23:15.022 { 00:23:15.022 "subsystem": "sock", 00:23:15.022 "config": [ 00:23:15.022 { 00:23:15.022 "method": "sock_set_default_impl", 00:23:15.022 "params": { 00:23:15.022 "impl_name": "posix" 00:23:15.022 } 00:23:15.022 }, 00:23:15.022 { 00:23:15.022 "method": "sock_impl_set_options", 00:23:15.022 "params": { 00:23:15.022 "impl_name": "ssl", 00:23:15.022 "recv_buf_size": 4096, 00:23:15.022 "send_buf_size": 4096, 00:23:15.022 "enable_recv_pipe": true, 00:23:15.022 "enable_quickack": false, 00:23:15.022 "enable_placement_id": 0, 00:23:15.022 "enable_zerocopy_send_server": true, 00:23:15.022 "enable_zerocopy_send_client": false, 00:23:15.022 "zerocopy_threshold": 0, 00:23:15.022 "tls_version": 0, 00:23:15.022 "enable_ktls": false 00:23:15.022 } 00:23:15.022 }, 00:23:15.022 { 00:23:15.022 "method": "sock_impl_set_options", 00:23:15.022 "params": { 00:23:15.022 "impl_name": "posix", 00:23:15.022 "recv_buf_size": 2097152, 00:23:15.022 "send_buf_size": 2097152, 00:23:15.022 "enable_recv_pipe": true, 00:23:15.022 "enable_quickack": false, 00:23:15.022 "enable_placement_id": 0, 00:23:15.022 "enable_zerocopy_send_server": true, 00:23:15.022 "enable_zerocopy_send_client": false, 00:23:15.022 "zerocopy_threshold": 0, 00:23:15.022 "tls_version": 0, 00:23:15.022 "enable_ktls": false 00:23:15.022 } 00:23:15.022 } 00:23:15.022 ] 00:23:15.022 }, 00:23:15.023 { 00:23:15.023 "subsystem": "vmd", 00:23:15.023 "config": [] 00:23:15.023 }, 00:23:15.023 { 00:23:15.023 "subsystem": "accel", 00:23:15.023 "config": [ 00:23:15.023 { 00:23:15.023 "method": "accel_set_options", 00:23:15.023 "params": { 00:23:15.023 "small_cache_size": 128, 00:23:15.023 "large_cache_size": 16, 00:23:15.023 "task_count": 2048, 00:23:15.023 "sequence_count": 2048, 00:23:15.023 "buf_count": 2048 00:23:15.023 } 00:23:15.023 } 00:23:15.023 ] 00:23:15.023 }, 00:23:15.023 { 00:23:15.023 "subsystem": "bdev", 00:23:15.023 "config": [ 00:23:15.023 { 00:23:15.023 "method": "bdev_set_options", 00:23:15.023 "params": { 00:23:15.023 "bdev_io_pool_size": 65535, 00:23:15.023 "bdev_io_cache_size": 256, 00:23:15.023 "bdev_auto_examine": true, 00:23:15.023 "iobuf_small_cache_size": 128, 00:23:15.023 "iobuf_large_cache_size": 16 00:23:15.023 } 00:23:15.023 
}, 00:23:15.023 { 00:23:15.023 "method": "bdev_raid_set_options", 00:23:15.023 "params": { 00:23:15.023 "process_window_size_kb": 1024, 00:23:15.023 "process_max_bandwidth_mb_sec": 0 00:23:15.023 } 00:23:15.023 }, 00:23:15.023 { 00:23:15.023 "method": "bdev_iscsi_set_options", 00:23:15.023 "params": { 00:23:15.023 "timeout_sec": 30 00:23:15.023 } 00:23:15.023 }, 00:23:15.023 { 00:23:15.023 "method": "bdev_nvme_set_options", 00:23:15.023 "params": { 00:23:15.023 "action_on_timeout": "none", 00:23:15.023 "timeout_us": 0, 00:23:15.023 "timeout_admin_us": 0, 00:23:15.023 "keep_alive_timeout_ms": 10000, 00:23:15.023 "arbitration_burst": 0, 00:23:15.023 "low_priority_weight": 0, 00:23:15.023 "medium_priority_weight": 0, 00:23:15.023 "high_priority_weight": 0, 00:23:15.023 "nvme_adminq_poll_period_us": 10000, 00:23:15.023 "nvme_ioq_poll_period_us": 0, 00:23:15.023 "io_queue_requests": 512, 00:23:15.023 "delay_cmd_submit": true, 00:23:15.023 "transport_retry_count": 4, 00:23:15.023 "bdev_retry_count": 3, 00:23:15.023 "transport_ack_timeout": 0, 00:23:15.023 "ctrlr_loss_timeout_sec": 0, 00:23:15.023 "reconnect_delay_sec": 0, 00:23:15.023 "fast_io_fail_timeout_sec": 0, 00:23:15.023 "disable_auto_failback": false, 00:23:15.023 "generate_uuids": false, 00:23:15.023 "transport_tos": 0, 00:23:15.023 "nvme_error_stat": false, 00:23:15.023 "rdma_srq_size": 0, 00:23:15.023 "io_path_stat": false, 00:23:15.023 "allow_accel_sequence": false, 00:23:15.023 "rdma_max_cq_size": 0, 00:23:15.023 "rdma_cm_event_timeout_ms": 0, 00:23:15.023 "dhchap_digests": [ 00:23:15.023 "sha256", 00:23:15.023 "sha384", 00:23:15.023 "sha512" 00:23:15.023 ], 00:23:15.023 "dhchap_dhgroups": [ 00:23:15.023 "null", 00:23:15.023 "ffdhe2048", 00:23:15.023 "ffdhe3072", 00:23:15.023 "ffdhe4096", 00:23:15.023 "ffdhe6144", 00:23:15.023 "ffdhe8192" 00:23:15.023 ] 00:23:15.023 } 00:23:15.023 }, 00:23:15.023 { 00:23:15.023 "method": "bdev_nvme_attach_controller", 00:23:15.023 "params": { 00:23:15.023 "name": "TLSTEST", 00:23:15.023 "trtype": "TCP", 00:23:15.023 "adrfam": "IPv4", 00:23:15.023 "traddr": "10.0.0.2", 00:23:15.023 "trsvcid": "4420", 00:23:15.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.023 "prchk_reftag": false, 00:23:15.023 "prchk_guard": false, 00:23:15.023 "ctrlr_loss_timeout_sec": 0, 00:23:15.023 "reconnect_delay_sec": 0, 00:23:15.023 "fast_io_fail_timeout_sec": 0, 00:23:15.023 "psk": "key0", 00:23:15.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.023 "hdgst": false, 00:23:15.023 "ddgst": false, 00:23:15.023 "multipath": "multipath" 00:23:15.023 } 00:23:15.023 }, 00:23:15.023 { 00:23:15.023 "method": "bdev_nvme_set_hotplug", 00:23:15.023 "params": { 00:23:15.023 "period_us": 100000, 00:23:15.023 "enable": false 00:23:15.023 } 00:23:15.023 }, 00:23:15.023 { 00:23:15.023 "method": "bdev_wait_for_examine" 00:23:15.023 } 00:23:15.023 ] 00:23:15.023 }, 00:23:15.023 { 00:23:15.023 "subsystem": "nbd", 00:23:15.023 "config": [] 00:23:15.023 } 00:23:15.023 ] 00:23:15.023 }' 00:23:15.284 [2024-12-09 05:16:29.033609] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
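The -c /dev/fd/63 argument on the bdevperf command line above is not a file on disk: the driver script echoes the JSON document just shown into a bash process substitution, and bdevperf reads its initial configuration from the resulting pipe. A minimal sketch of the same trick, with an assumed cfg_json variable standing in for the echoed document:

  # Feed an in-memory JSON config to bdevperf without touching disk; inside
  # the child process the pipe appears as /dev/fd/NN (here /dev/fd/63).
  cfg_json='{"subsystems": []}'   # replace with the full document echoed above
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$cfg_json")

The -z flag keeps bdevperf idle after startup so the workload can be triggered later over the RPC socket.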
00:23:15.284 [2024-12-09 05:16:29.033719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593308 ] 00:23:15.284 [2024-12-09 05:16:29.166892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.284 [2024-12-09 05:16:29.240717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.546 [2024-12-09 05:16:29.502508] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.806 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.806 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.806 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:16.067 Running I/O for 10 seconds... 00:23:17.949 1506.00 IOPS, 5.88 MiB/s [2024-12-09T04:16:32.889Z] 1838.00 IOPS, 7.18 MiB/s [2024-12-09T04:16:34.274Z] 1988.33 IOPS, 7.77 MiB/s [2024-12-09T04:16:35.275Z] 1905.75 IOPS, 7.44 MiB/s [2024-12-09T04:16:36.225Z] 1933.80 IOPS, 7.55 MiB/s [2024-12-09T04:16:37.226Z] 2167.67 IOPS, 8.47 MiB/s [2024-12-09T04:16:38.165Z] 2395.57 IOPS, 9.36 MiB/s [2024-12-09T04:16:39.108Z] 2347.25 IOPS, 9.17 MiB/s [2024-12-09T04:16:40.050Z] 2295.78 IOPS, 8.97 MiB/s [2024-12-09T04:16:40.050Z] 2333.40 IOPS, 9.11 MiB/s 00:23:26.053 Latency(us) 00:23:26.053 [2024-12-09T04:16:40.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.053 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:26.053 Verification LBA range: start 0x0 length 0x2000 00:23:26.053 TLSTESTn1 : 10.02 2342.54 9.15 0.00 0.00 54586.58 5133.65 132819.63 00:23:26.053 [2024-12-09T04:16:40.050Z] =================================================================================================================== 00:23:26.053 [2024-12-09T04:16:40.050Z] Total : 2342.54 9.15 0.00 0.00 54586.58 5133.65 132819.63 00:23:26.053 { 00:23:26.053 "results": [ 00:23:26.053 { 00:23:26.053 "job": "TLSTESTn1", 00:23:26.053 "core_mask": "0x4", 00:23:26.053 "workload": "verify", 00:23:26.053 "status": "finished", 00:23:26.053 "verify_range": { 00:23:26.053 "start": 0, 00:23:26.053 "length": 8192 00:23:26.053 }, 00:23:26.053 "queue_depth": 128, 00:23:26.053 "io_size": 4096, 00:23:26.053 "runtime": 10.015202, 00:23:26.053 "iops": 2342.538872406168, 00:23:26.053 "mibps": 9.150542470336594, 00:23:26.053 "io_failed": 0, 00:23:26.053 "io_timeout": 0, 00:23:26.053 "avg_latency_us": 54586.576035690436, 00:23:26.053 "min_latency_us": 5133.653333333334, 00:23:26.053 "max_latency_us": 132819.62666666668 00:23:26.053 } 00:23:26.053 ], 00:23:26.053 "core_count": 1 00:23:26.053 } 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1593308 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1593308 ']' 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1593308 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593308 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593308' 00:23:26.053 killing process with pid 1593308 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1593308 00:23:26.053 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.053 00:23:26.053 Latency(us) 00:23:26.053 [2024-12-09T04:16:40.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.053 [2024-12-09T04:16:40.050Z] =================================================================================================================== 00:23:26.053 [2024-12-09T04:16:40.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.053 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1593308 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1593180 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1593180 ']' 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1593180 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593180 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593180' 00:23:26.625 killing process with pid 1593180 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1593180 00:23:26.625 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1593180 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1595670 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1595670 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
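waitforlisten 1595670 above blocks until the freshly launched nvmf_tgt answers on its UNIX RPC socket; the kill sequences before it use the inverse checks (kill -0 for liveness, ps --no-headers -o comm= to confirm the pid is an SPDK reactor and not the sudo wrapper) before tearing a process down. A minimal polling sketch of the wait side, assuming rpc.py is on PATH and using the generic rpc_get_methods call; the real helper additionally bounds the loop with max_retries=100:

  # Wait until the target (pid assumed to be in $nvmfpid) answers RPCs
  # on /var/tmp/spdk.sock, bailing out if it dies first.
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "target exited early"; exit 1; }
      sleep 0.5
  done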
00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1595670 ']' 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.197 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.459 [2024-12-09 05:16:41.240331] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:27.459 [2024-12-09 05:16:41.240437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.459 [2024-12-09 05:16:41.394702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.719 [2024-12-09 05:16:41.513805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.719 [2024-12-09 05:16:41.513886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.719 [2024-12-09 05:16:41.513900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.719 [2024-12-09 05:16:41.513913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.719 [2024-12-09 05:16:41.513926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
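The app_setup_trace notices above appear because the target was started with -e 0xFFFF, enabling every tracepoint group. Following the two options the notices themselves name, a snapshot can be taken live or the shared-memory buffer kept for offline analysis (a sketch; the spdk_trace tool ships with SPDK):

  # Live snapshot of the nvmf target's trace events (instance nvmf, shm id 0):
  spdk_trace -s nvmf -i 0
  # Or preserve the raw buffer for later inspection, as the notice suggests:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0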
00:23:27.719 [2024-12-09 05:16:41.515415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.o7vytNf5Us 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o7vytNf5Us 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:28.292 [2024-12-09 05:16:42.218658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.292 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:28.553 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:28.814 [2024-12-09 05:16:42.567615] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.814 [2024-12-09 05:16:42.568076] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.814 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:28.814 malloc0 00:23:28.814 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:29.075 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us 00:23:29.336 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.596 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:29.596 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1596039 00:23:29.597 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.597 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1596039 /var/tmp/bdevperf.sock 00:23:29.597 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1596039 ']' 00:23:29.597 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.597 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.597 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.597 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.597 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.597 [2024-12-09 05:16:43.408491] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:29.597 [2024-12-09 05:16:43.408623] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596039 ] 00:23:29.597 [2024-12-09 05:16:43.554554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.857 [2024-12-09 05:16:43.631800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.428 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.429 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:30.429 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us 00:23:30.429 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:30.688 [2024-12-09 05:16:44.526662] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.688 nvme0n1 00:23:30.688 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.948 Running I/O for 1 seconds... 
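Because this bdevperf instance was also started with -z, it sits idle after attaching nvme0 until the perform_tests RPC above arrives; the workload parameters themselves (-q 128 -o 4k -w verify -t 1) were fixed on its command line. A condensed sketch of the trigger, using the same helper script and socket as this run (the -t RPC timeout is optional and shown here with the value the earlier 10-second run passed):

  # Kick off the preconfigured workload and block until the JSON result returns:
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The "results" document printed below carries the same numbers as the human-readable Latency table: per-job iops, mibps, and avg_latency_us, plus the exact runtime.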
00:23:31.889 1157.00 IOPS, 4.52 MiB/s 00:23:31.889 Latency(us) 00:23:31.889 [2024-12-09T04:16:45.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.889 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.889 Verification LBA range: start 0x0 length 0x2000 00:23:31.889 nvme0n1 : 1.11 1156.34 4.52 0.00 0.00 106302.10 6826.67 124955.31 00:23:31.889 [2024-12-09T04:16:45.886Z] =================================================================================================================== 00:23:31.889 [2024-12-09T04:16:45.886Z] Total : 1156.34 4.52 0.00 0.00 106302.10 6826.67 124955.31 00:23:31.889 { 00:23:31.889 "results": [ 00:23:31.889 { 00:23:31.889 "job": "nvme0n1", 00:23:31.889 "core_mask": "0x2", 00:23:31.889 "workload": "verify", 00:23:31.889 "status": "finished", 00:23:31.889 "verify_range": { 00:23:31.889 "start": 0, 00:23:31.889 "length": 8192 00:23:31.889 }, 00:23:31.889 "queue_depth": 128, 00:23:31.889 "io_size": 4096, 00:23:31.889 "runtime": 1.112127, 00:23:31.889 "iops": 1156.3427558183553, 00:23:31.889 "mibps": 4.51696388991545, 00:23:31.889 "io_failed": 0, 00:23:31.889 "io_timeout": 0, 00:23:31.889 "avg_latency_us": 106302.10355624676, 00:23:31.889 "min_latency_us": 6826.666666666667, 00:23:31.889 "max_latency_us": 124955.30666666667 00:23:31.889 } 00:23:31.889 ], 00:23:31.889 "core_count": 1 00:23:31.889 } 00:23:31.889 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1596039 00:23:31.889 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1596039 ']' 00:23:31.889 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1596039 00:23:31.889 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.889 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.149 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596039 00:23:32.149 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.149 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.149 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596039' 00:23:32.149 killing process with pid 1596039 00:23:32.149 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1596039 00:23:32.149 Received shutdown signal, test time was about 1.000000 seconds 00:23:32.149 00:23:32.149 Latency(us) 00:23:32.149 [2024-12-09T04:16:46.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.149 [2024-12-09T04:16:46.146Z] =================================================================================================================== 00:23:32.149 [2024-12-09T04:16:46.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.149 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1596039 00:23:32.408 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1595670 00:23:32.408 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1595670 ']' 00:23:32.408 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1595670 00:23:32.408 05:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.408 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.409 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1595670 00:23:32.668 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.668 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.668 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1595670' 00:23:32.668 killing process with pid 1595670 00:23:32.668 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1595670 00:23:32.668 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1595670 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1596814 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1596814 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1596814 ']' 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.240 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.241 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.241 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.241 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.241 [2024-12-09 05:16:47.173009] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:33.241 [2024-12-09 05:16:47.173127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.501 [2024-12-09 05:16:47.319065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.501 [2024-12-09 05:16:47.396537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.501 [2024-12-09 05:16:47.396576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:33.501 [2024-12-09 05:16:47.396585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.501 [2024-12-09 05:16:47.396593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.501 [2024-12-09 05:16:47.396601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.501 [2024-12-09 05:16:47.397562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.069 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.069 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.069 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.069 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.070 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.070 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.070 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:34.070 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.070 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.070 [2024-12-09 05:16:47.964672] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.070 malloc0 00:23:34.070 [2024-12-09 05:16:48.004448] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.070 [2024-12-09 05:16:48.004682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1597066 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1597066 /var/tmp/bdevperf.sock 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1597066 ']' 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.070 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.332 [2024-12-09 05:16:48.110459] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
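The rpc_cmd block above rebuilds the same TLS-enabled target state that setup_nvmf_tgt created earlier in the run, this time so that save_config can serialize it for the -c /dev/fd/62 restart. Spelled out as the explicit rpc.py calls that appear elsewhere in this log (values are this run's; judging by the saved config, -o corresponds to the "c2h_success": false setting, and -k requests the TLS secure channel on the listener):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0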
00:23:34.332 [2024-12-09 05:16:48.110567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597066 ] 00:23:34.332 [2024-12-09 05:16:48.241070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.332 [2024-12-09 05:16:48.315679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.901 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.901 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.901 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us 00:23:35.159 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:35.420 [2024-12-09 05:16:49.174097] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.420 nvme0n1 00:23:35.420 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.420 Running I/O for 1 seconds... 00:23:36.803 1501.00 IOPS, 5.86 MiB/s 00:23:36.803 Latency(us) 00:23:36.803 [2024-12-09T04:16:50.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.803 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:36.803 Verification LBA range: start 0x0 length 0x2000 00:23:36.803 nvme0n1 : 1.07 1518.98 5.93 0.00 0.00 82114.75 7427.41 197481.81 00:23:36.803 [2024-12-09T04:16:50.800Z] =================================================================================================================== 00:23:36.803 [2024-12-09T04:16:50.800Z] Total : 1518.98 5.93 0.00 0.00 82114.75 7427.41 197481.81 00:23:36.803 { 00:23:36.803 "results": [ 00:23:36.803 { 00:23:36.803 "job": "nvme0n1", 00:23:36.803 "core_mask": "0x2", 00:23:36.803 "workload": "verify", 00:23:36.803 "status": "finished", 00:23:36.803 "verify_range": { 00:23:36.803 "start": 0, 00:23:36.803 "length": 8192 00:23:36.804 }, 00:23:36.804 "queue_depth": 128, 00:23:36.804 "io_size": 4096, 00:23:36.804 "runtime": 1.073085, 00:23:36.804 "iops": 1518.9849825503106, 00:23:36.804 "mibps": 5.933535088087151, 00:23:36.804 "io_failed": 0, 00:23:36.804 "io_timeout": 0, 00:23:36.804 "avg_latency_us": 82114.74846625768, 00:23:36.804 "min_latency_us": 7427.413333333333, 00:23:36.804 "max_latency_us": 197481.81333333332 00:23:36.804 } 00:23:36.804 ], 00:23:36.804 "core_count": 1 00:23:36.804 } 00:23:36.804 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:36.804 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.804 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.804 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.804 05:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:36.804 "subsystems": [ 00:23:36.804 { 00:23:36.804 "subsystem": "keyring", 00:23:36.804 "config": [ 00:23:36.804 { 00:23:36.804 "method": "keyring_file_add_key", 00:23:36.804 "params": { 00:23:36.804 "name": "key0", 00:23:36.804 "path": "/tmp/tmp.o7vytNf5Us" 00:23:36.804 } 00:23:36.804 } 00:23:36.804 ] 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "subsystem": "iobuf", 00:23:36.804 "config": [ 00:23:36.804 { 00:23:36.804 "method": "iobuf_set_options", 00:23:36.804 "params": { 00:23:36.804 "small_pool_count": 8192, 00:23:36.804 "large_pool_count": 1024, 00:23:36.804 "small_bufsize": 8192, 00:23:36.804 "large_bufsize": 135168, 00:23:36.804 "enable_numa": false 00:23:36.804 } 00:23:36.804 } 00:23:36.804 ] 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "subsystem": "sock", 00:23:36.804 "config": [ 00:23:36.804 { 00:23:36.804 "method": "sock_set_default_impl", 00:23:36.804 "params": { 00:23:36.804 "impl_name": "posix" 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "sock_impl_set_options", 00:23:36.804 "params": { 00:23:36.804 "impl_name": "ssl", 00:23:36.804 "recv_buf_size": 4096, 00:23:36.804 "send_buf_size": 4096, 00:23:36.804 "enable_recv_pipe": true, 00:23:36.804 "enable_quickack": false, 00:23:36.804 "enable_placement_id": 0, 00:23:36.804 "enable_zerocopy_send_server": true, 00:23:36.804 "enable_zerocopy_send_client": false, 00:23:36.804 "zerocopy_threshold": 0, 00:23:36.804 "tls_version": 0, 00:23:36.804 "enable_ktls": false 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "sock_impl_set_options", 00:23:36.804 "params": { 00:23:36.804 "impl_name": "posix", 00:23:36.804 "recv_buf_size": 2097152, 00:23:36.804 "send_buf_size": 2097152, 00:23:36.804 "enable_recv_pipe": true, 00:23:36.804 "enable_quickack": false, 00:23:36.804 "enable_placement_id": 0, 00:23:36.804 "enable_zerocopy_send_server": true, 00:23:36.804 "enable_zerocopy_send_client": false, 00:23:36.804 "zerocopy_threshold": 0, 00:23:36.804 "tls_version": 0, 00:23:36.804 "enable_ktls": false 00:23:36.804 } 00:23:36.804 } 00:23:36.804 ] 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "subsystem": "vmd", 00:23:36.804 "config": [] 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "subsystem": "accel", 00:23:36.804 "config": [ 00:23:36.804 { 00:23:36.804 "method": "accel_set_options", 00:23:36.804 "params": { 00:23:36.804 "small_cache_size": 128, 00:23:36.804 "large_cache_size": 16, 00:23:36.804 "task_count": 2048, 00:23:36.804 "sequence_count": 2048, 00:23:36.804 "buf_count": 2048 00:23:36.804 } 00:23:36.804 } 00:23:36.804 ] 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "subsystem": "bdev", 00:23:36.804 "config": [ 00:23:36.804 { 00:23:36.804 "method": "bdev_set_options", 00:23:36.804 "params": { 00:23:36.804 "bdev_io_pool_size": 65535, 00:23:36.804 "bdev_io_cache_size": 256, 00:23:36.804 "bdev_auto_examine": true, 00:23:36.804 "iobuf_small_cache_size": 128, 00:23:36.804 "iobuf_large_cache_size": 16 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "bdev_raid_set_options", 00:23:36.804 "params": { 00:23:36.804 "process_window_size_kb": 1024, 00:23:36.804 "process_max_bandwidth_mb_sec": 0 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "bdev_iscsi_set_options", 00:23:36.804 "params": { 00:23:36.804 "timeout_sec": 30 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "bdev_nvme_set_options", 00:23:36.804 "params": { 00:23:36.804 "action_on_timeout": "none", 00:23:36.804 
"timeout_us": 0, 00:23:36.804 "timeout_admin_us": 0, 00:23:36.804 "keep_alive_timeout_ms": 10000, 00:23:36.804 "arbitration_burst": 0, 00:23:36.804 "low_priority_weight": 0, 00:23:36.804 "medium_priority_weight": 0, 00:23:36.804 "high_priority_weight": 0, 00:23:36.804 "nvme_adminq_poll_period_us": 10000, 00:23:36.804 "nvme_ioq_poll_period_us": 0, 00:23:36.804 "io_queue_requests": 0, 00:23:36.804 "delay_cmd_submit": true, 00:23:36.804 "transport_retry_count": 4, 00:23:36.804 "bdev_retry_count": 3, 00:23:36.804 "transport_ack_timeout": 0, 00:23:36.804 "ctrlr_loss_timeout_sec": 0, 00:23:36.804 "reconnect_delay_sec": 0, 00:23:36.804 "fast_io_fail_timeout_sec": 0, 00:23:36.804 "disable_auto_failback": false, 00:23:36.804 "generate_uuids": false, 00:23:36.804 "transport_tos": 0, 00:23:36.804 "nvme_error_stat": false, 00:23:36.804 "rdma_srq_size": 0, 00:23:36.804 "io_path_stat": false, 00:23:36.804 "allow_accel_sequence": false, 00:23:36.804 "rdma_max_cq_size": 0, 00:23:36.804 "rdma_cm_event_timeout_ms": 0, 00:23:36.804 "dhchap_digests": [ 00:23:36.804 "sha256", 00:23:36.804 "sha384", 00:23:36.804 "sha512" 00:23:36.804 ], 00:23:36.804 "dhchap_dhgroups": [ 00:23:36.804 "null", 00:23:36.804 "ffdhe2048", 00:23:36.804 "ffdhe3072", 00:23:36.804 "ffdhe4096", 00:23:36.804 "ffdhe6144", 00:23:36.804 "ffdhe8192" 00:23:36.804 ] 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "bdev_nvme_set_hotplug", 00:23:36.804 "params": { 00:23:36.804 "period_us": 100000, 00:23:36.804 "enable": false 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "bdev_malloc_create", 00:23:36.804 "params": { 00:23:36.804 "name": "malloc0", 00:23:36.804 "num_blocks": 8192, 00:23:36.804 "block_size": 4096, 00:23:36.804 "physical_block_size": 4096, 00:23:36.804 "uuid": "ed3f87b4-306d-4a3d-9624-01b0bd065220", 00:23:36.804 "optimal_io_boundary": 0, 00:23:36.804 "md_size": 0, 00:23:36.804 "dif_type": 0, 00:23:36.804 "dif_is_head_of_md": false, 00:23:36.804 "dif_pi_format": 0 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "bdev_wait_for_examine" 00:23:36.804 } 00:23:36.804 ] 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "subsystem": "nbd", 00:23:36.804 "config": [] 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "subsystem": "scheduler", 00:23:36.804 "config": [ 00:23:36.804 { 00:23:36.804 "method": "framework_set_scheduler", 00:23:36.804 "params": { 00:23:36.804 "name": "static" 00:23:36.804 } 00:23:36.804 } 00:23:36.804 ] 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "subsystem": "nvmf", 00:23:36.804 "config": [ 00:23:36.804 { 00:23:36.804 "method": "nvmf_set_config", 00:23:36.804 "params": { 00:23:36.804 "discovery_filter": "match_any", 00:23:36.804 "admin_cmd_passthru": { 00:23:36.804 "identify_ctrlr": false 00:23:36.804 }, 00:23:36.804 "dhchap_digests": [ 00:23:36.804 "sha256", 00:23:36.804 "sha384", 00:23:36.804 "sha512" 00:23:36.804 ], 00:23:36.804 "dhchap_dhgroups": [ 00:23:36.804 "null", 00:23:36.804 "ffdhe2048", 00:23:36.804 "ffdhe3072", 00:23:36.804 "ffdhe4096", 00:23:36.804 "ffdhe6144", 00:23:36.804 "ffdhe8192" 00:23:36.804 ] 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "nvmf_set_max_subsystems", 00:23:36.804 "params": { 00:23:36.804 "max_subsystems": 1024 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "nvmf_set_crdt", 00:23:36.804 "params": { 00:23:36.804 "crdt1": 0, 00:23:36.804 "crdt2": 0, 00:23:36.804 "crdt3": 0 00:23:36.804 } 00:23:36.804 }, 00:23:36.804 { 00:23:36.804 "method": "nvmf_create_transport", 00:23:36.804 "params": 
{ 00:23:36.804 "trtype": "TCP", 00:23:36.804 "max_queue_depth": 128, 00:23:36.804 "max_io_qpairs_per_ctrlr": 127, 00:23:36.804 "in_capsule_data_size": 4096, 00:23:36.804 "max_io_size": 131072, 00:23:36.804 "io_unit_size": 131072, 00:23:36.804 "max_aq_depth": 128, 00:23:36.804 "num_shared_buffers": 511, 00:23:36.804 "buf_cache_size": 4294967295, 00:23:36.804 "dif_insert_or_strip": false, 00:23:36.804 "zcopy": false, 00:23:36.804 "c2h_success": false, 00:23:36.804 "sock_priority": 0, 00:23:36.804 "abort_timeout_sec": 1, 00:23:36.804 "ack_timeout": 0, 00:23:36.805 "data_wr_pool_size": 0 00:23:36.805 } 00:23:36.805 }, 00:23:36.805 { 00:23:36.805 "method": "nvmf_create_subsystem", 00:23:36.805 "params": { 00:23:36.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.805 "allow_any_host": false, 00:23:36.805 "serial_number": "00000000000000000000", 00:23:36.805 "model_number": "SPDK bdev Controller", 00:23:36.805 "max_namespaces": 32, 00:23:36.805 "min_cntlid": 1, 00:23:36.805 "max_cntlid": 65519, 00:23:36.805 "ana_reporting": false 00:23:36.805 } 00:23:36.805 }, 00:23:36.805 { 00:23:36.805 "method": "nvmf_subsystem_add_host", 00:23:36.805 "params": { 00:23:36.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.805 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.805 "psk": "key0" 00:23:36.805 } 00:23:36.805 }, 00:23:36.805 { 00:23:36.805 "method": "nvmf_subsystem_add_ns", 00:23:36.805 "params": { 00:23:36.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.805 "namespace": { 00:23:36.805 "nsid": 1, 00:23:36.805 "bdev_name": "malloc0", 00:23:36.805 "nguid": "ED3F87B4306D4A3D962401B0BD065220", 00:23:36.805 "uuid": "ed3f87b4-306d-4a3d-9624-01b0bd065220", 00:23:36.805 "no_auto_visible": false 00:23:36.805 } 00:23:36.805 } 00:23:36.805 }, 00:23:36.805 { 00:23:36.805 "method": "nvmf_subsystem_add_listener", 00:23:36.805 "params": { 00:23:36.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.805 "listen_address": { 00:23:36.805 "trtype": "TCP", 00:23:36.805 "adrfam": "IPv4", 00:23:36.805 "traddr": "10.0.0.2", 00:23:36.805 "trsvcid": "4420" 00:23:36.805 }, 00:23:36.805 "secure_channel": false, 00:23:36.805 "sock_impl": "ssl" 00:23:36.805 } 00:23:36.805 } 00:23:36.805 ] 00:23:36.805 } 00:23:36.805 ] 00:23:36.805 }' 00:23:36.805 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:37.067 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:37.067 "subsystems": [ 00:23:37.067 { 00:23:37.067 "subsystem": "keyring", 00:23:37.067 "config": [ 00:23:37.067 { 00:23:37.067 "method": "keyring_file_add_key", 00:23:37.067 "params": { 00:23:37.067 "name": "key0", 00:23:37.067 "path": "/tmp/tmp.o7vytNf5Us" 00:23:37.067 } 00:23:37.067 } 00:23:37.067 ] 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "subsystem": "iobuf", 00:23:37.067 "config": [ 00:23:37.067 { 00:23:37.067 "method": "iobuf_set_options", 00:23:37.067 "params": { 00:23:37.067 "small_pool_count": 8192, 00:23:37.067 "large_pool_count": 1024, 00:23:37.067 "small_bufsize": 8192, 00:23:37.067 "large_bufsize": 135168, 00:23:37.067 "enable_numa": false 00:23:37.067 } 00:23:37.067 } 00:23:37.067 ] 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "subsystem": "sock", 00:23:37.067 "config": [ 00:23:37.067 { 00:23:37.067 "method": "sock_set_default_impl", 00:23:37.067 "params": { 00:23:37.067 "impl_name": "posix" 00:23:37.067 } 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "method": "sock_impl_set_options", 00:23:37.067 
"params": { 00:23:37.067 "impl_name": "ssl", 00:23:37.067 "recv_buf_size": 4096, 00:23:37.067 "send_buf_size": 4096, 00:23:37.067 "enable_recv_pipe": true, 00:23:37.067 "enable_quickack": false, 00:23:37.067 "enable_placement_id": 0, 00:23:37.067 "enable_zerocopy_send_server": true, 00:23:37.067 "enable_zerocopy_send_client": false, 00:23:37.067 "zerocopy_threshold": 0, 00:23:37.067 "tls_version": 0, 00:23:37.067 "enable_ktls": false 00:23:37.067 } 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "method": "sock_impl_set_options", 00:23:37.067 "params": { 00:23:37.067 "impl_name": "posix", 00:23:37.067 "recv_buf_size": 2097152, 00:23:37.067 "send_buf_size": 2097152, 00:23:37.067 "enable_recv_pipe": true, 00:23:37.067 "enable_quickack": false, 00:23:37.067 "enable_placement_id": 0, 00:23:37.067 "enable_zerocopy_send_server": true, 00:23:37.067 "enable_zerocopy_send_client": false, 00:23:37.067 "zerocopy_threshold": 0, 00:23:37.067 "tls_version": 0, 00:23:37.067 "enable_ktls": false 00:23:37.067 } 00:23:37.067 } 00:23:37.067 ] 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "subsystem": "vmd", 00:23:37.067 "config": [] 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "subsystem": "accel", 00:23:37.067 "config": [ 00:23:37.067 { 00:23:37.067 "method": "accel_set_options", 00:23:37.067 "params": { 00:23:37.067 "small_cache_size": 128, 00:23:37.067 "large_cache_size": 16, 00:23:37.067 "task_count": 2048, 00:23:37.067 "sequence_count": 2048, 00:23:37.067 "buf_count": 2048 00:23:37.067 } 00:23:37.067 } 00:23:37.067 ] 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "subsystem": "bdev", 00:23:37.067 "config": [ 00:23:37.067 { 00:23:37.067 "method": "bdev_set_options", 00:23:37.067 "params": { 00:23:37.067 "bdev_io_pool_size": 65535, 00:23:37.067 "bdev_io_cache_size": 256, 00:23:37.067 "bdev_auto_examine": true, 00:23:37.067 "iobuf_small_cache_size": 128, 00:23:37.067 "iobuf_large_cache_size": 16 00:23:37.067 } 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "method": "bdev_raid_set_options", 00:23:37.067 "params": { 00:23:37.067 "process_window_size_kb": 1024, 00:23:37.067 "process_max_bandwidth_mb_sec": 0 00:23:37.067 } 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "method": "bdev_iscsi_set_options", 00:23:37.067 "params": { 00:23:37.067 "timeout_sec": 30 00:23:37.067 } 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "method": "bdev_nvme_set_options", 00:23:37.067 "params": { 00:23:37.067 "action_on_timeout": "none", 00:23:37.067 "timeout_us": 0, 00:23:37.067 "timeout_admin_us": 0, 00:23:37.067 "keep_alive_timeout_ms": 10000, 00:23:37.067 "arbitration_burst": 0, 00:23:37.067 "low_priority_weight": 0, 00:23:37.067 "medium_priority_weight": 0, 00:23:37.067 "high_priority_weight": 0, 00:23:37.067 "nvme_adminq_poll_period_us": 10000, 00:23:37.067 "nvme_ioq_poll_period_us": 0, 00:23:37.067 "io_queue_requests": 512, 00:23:37.067 "delay_cmd_submit": true, 00:23:37.067 "transport_retry_count": 4, 00:23:37.067 "bdev_retry_count": 3, 00:23:37.067 "transport_ack_timeout": 0, 00:23:37.067 "ctrlr_loss_timeout_sec": 0, 00:23:37.067 "reconnect_delay_sec": 0, 00:23:37.067 "fast_io_fail_timeout_sec": 0, 00:23:37.067 "disable_auto_failback": false, 00:23:37.067 "generate_uuids": false, 00:23:37.067 "transport_tos": 0, 00:23:37.067 "nvme_error_stat": false, 00:23:37.067 "rdma_srq_size": 0, 00:23:37.067 "io_path_stat": false, 00:23:37.067 "allow_accel_sequence": false, 00:23:37.067 "rdma_max_cq_size": 0, 00:23:37.067 "rdma_cm_event_timeout_ms": 0, 00:23:37.067 "dhchap_digests": [ 00:23:37.067 "sha256", 00:23:37.067 "sha384", 00:23:37.067 
"sha512" 00:23:37.067 ], 00:23:37.067 "dhchap_dhgroups": [ 00:23:37.067 "null", 00:23:37.067 "ffdhe2048", 00:23:37.067 "ffdhe3072", 00:23:37.067 "ffdhe4096", 00:23:37.067 "ffdhe6144", 00:23:37.067 "ffdhe8192" 00:23:37.067 ] 00:23:37.067 } 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "method": "bdev_nvme_attach_controller", 00:23:37.067 "params": { 00:23:37.067 "name": "nvme0", 00:23:37.067 "trtype": "TCP", 00:23:37.067 "adrfam": "IPv4", 00:23:37.067 "traddr": "10.0.0.2", 00:23:37.067 "trsvcid": "4420", 00:23:37.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.067 "prchk_reftag": false, 00:23:37.067 "prchk_guard": false, 00:23:37.067 "ctrlr_loss_timeout_sec": 0, 00:23:37.067 "reconnect_delay_sec": 0, 00:23:37.067 "fast_io_fail_timeout_sec": 0, 00:23:37.067 "psk": "key0", 00:23:37.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.067 "hdgst": false, 00:23:37.067 "ddgst": false, 00:23:37.067 "multipath": "multipath" 00:23:37.067 } 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "method": "bdev_nvme_set_hotplug", 00:23:37.067 "params": { 00:23:37.067 "period_us": 100000, 00:23:37.067 "enable": false 00:23:37.067 } 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "method": "bdev_enable_histogram", 00:23:37.067 "params": { 00:23:37.067 "name": "nvme0n1", 00:23:37.067 "enable": true 00:23:37.067 } 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "method": "bdev_wait_for_examine" 00:23:37.067 } 00:23:37.067 ] 00:23:37.067 }, 00:23:37.067 { 00:23:37.067 "subsystem": "nbd", 00:23:37.067 "config": [] 00:23:37.067 } 00:23:37.067 ] 00:23:37.067 }' 00:23:37.067 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1597066 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1597066 ']' 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1597066 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1597066 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1597066' 00:23:37.068 killing process with pid 1597066 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1597066 00:23:37.068 Received shutdown signal, test time was about 1.000000 seconds 00:23:37.068 00:23:37.068 Latency(us) 00:23:37.068 [2024-12-09T04:16:51.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.068 [2024-12-09T04:16:51.065Z] =================================================================================================================== 00:23:37.068 [2024-12-09T04:16:51.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.068 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1597066 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1596814 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1596814 
']' 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1596814 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596814 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596814' 00:23:37.638 killing process with pid 1596814 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1596814 00:23:37.638 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1596814 00:23:38.209 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:38.209 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.209 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.209 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:38.209 "subsystems": [ 00:23:38.209 { 00:23:38.209 "subsystem": "keyring", 00:23:38.209 "config": [ 00:23:38.209 { 00:23:38.209 "method": "keyring_file_add_key", 00:23:38.209 "params": { 00:23:38.209 "name": "key0", 00:23:38.209 "path": "/tmp/tmp.o7vytNf5Us" 00:23:38.209 } 00:23:38.209 } 00:23:38.209 ] 00:23:38.209 }, 00:23:38.209 { 00:23:38.209 "subsystem": "iobuf", 00:23:38.209 "config": [ 00:23:38.209 { 00:23:38.209 "method": "iobuf_set_options", 00:23:38.209 "params": { 00:23:38.209 "small_pool_count": 8192, 00:23:38.209 "large_pool_count": 1024, 00:23:38.209 "small_bufsize": 8192, 00:23:38.209 "large_bufsize": 135168, 00:23:38.209 "enable_numa": false 00:23:38.209 } 00:23:38.209 } 00:23:38.209 ] 00:23:38.209 }, 00:23:38.209 { 00:23:38.209 "subsystem": "sock", 00:23:38.209 "config": [ 00:23:38.209 { 00:23:38.209 "method": "sock_set_default_impl", 00:23:38.209 "params": { 00:23:38.209 "impl_name": "posix" 00:23:38.209 } 00:23:38.209 }, 00:23:38.209 { 00:23:38.209 "method": "sock_impl_set_options", 00:23:38.210 "params": { 00:23:38.210 "impl_name": "ssl", 00:23:38.210 "recv_buf_size": 4096, 00:23:38.210 "send_buf_size": 4096, 00:23:38.210 "enable_recv_pipe": true, 00:23:38.210 "enable_quickack": false, 00:23:38.210 "enable_placement_id": 0, 00:23:38.210 "enable_zerocopy_send_server": true, 00:23:38.210 "enable_zerocopy_send_client": false, 00:23:38.210 "zerocopy_threshold": 0, 00:23:38.210 "tls_version": 0, 00:23:38.210 "enable_ktls": false 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "sock_impl_set_options", 00:23:38.210 "params": { 00:23:38.210 "impl_name": "posix", 00:23:38.210 "recv_buf_size": 2097152, 00:23:38.210 "send_buf_size": 2097152, 00:23:38.210 "enable_recv_pipe": true, 00:23:38.210 "enable_quickack": false, 00:23:38.210 "enable_placement_id": 0, 00:23:38.210 "enable_zerocopy_send_server": true, 00:23:38.210 "enable_zerocopy_send_client": false, 00:23:38.210 "zerocopy_threshold": 0, 00:23:38.210 "tls_version": 0, 00:23:38.210 "enable_ktls": 
false 00:23:38.210 } 00:23:38.210 } 00:23:38.210 ] 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "subsystem": "vmd", 00:23:38.210 "config": [] 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "subsystem": "accel", 00:23:38.210 "config": [ 00:23:38.210 { 00:23:38.210 "method": "accel_set_options", 00:23:38.210 "params": { 00:23:38.210 "small_cache_size": 128, 00:23:38.210 "large_cache_size": 16, 00:23:38.210 "task_count": 2048, 00:23:38.210 "sequence_count": 2048, 00:23:38.210 "buf_count": 2048 00:23:38.210 } 00:23:38.210 } 00:23:38.210 ] 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "subsystem": "bdev", 00:23:38.210 "config": [ 00:23:38.210 { 00:23:38.210 "method": "bdev_set_options", 00:23:38.210 "params": { 00:23:38.210 "bdev_io_pool_size": 65535, 00:23:38.210 "bdev_io_cache_size": 256, 00:23:38.210 "bdev_auto_examine": true, 00:23:38.210 "iobuf_small_cache_size": 128, 00:23:38.210 "iobuf_large_cache_size": 16 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "bdev_raid_set_options", 00:23:38.210 "params": { 00:23:38.210 "process_window_size_kb": 1024, 00:23:38.210 "process_max_bandwidth_mb_sec": 0 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "bdev_iscsi_set_options", 00:23:38.210 "params": { 00:23:38.210 "timeout_sec": 30 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "bdev_nvme_set_options", 00:23:38.210 "params": { 00:23:38.210 "action_on_timeout": "none", 00:23:38.210 "timeout_us": 0, 00:23:38.210 "timeout_admin_us": 0, 00:23:38.210 "keep_alive_timeout_ms": 10000, 00:23:38.210 "arbitration_burst": 0, 00:23:38.210 "low_priority_weight": 0, 00:23:38.210 "medium_priority_weight": 0, 00:23:38.210 "high_priority_weight": 0, 00:23:38.210 "nvme_adminq_poll_period_us": 10000, 00:23:38.210 "nvme_ioq_poll_period_us": 0, 00:23:38.210 "io_queue_requests": 0, 00:23:38.210 "delay_cmd_submit": true, 00:23:38.210 "transport_retry_count": 4, 00:23:38.210 "bdev_retry_count": 3, 00:23:38.210 "transport_ack_timeout": 0, 00:23:38.210 "ctrlr_loss_timeout_sec": 0, 00:23:38.210 "reconnect_delay_sec": 0, 00:23:38.210 "fast_io_fail_timeout_sec": 0, 00:23:38.210 "disable_auto_failback": false, 00:23:38.210 "generate_uuids": false, 00:23:38.210 "transport_tos": 0, 00:23:38.210 "nvme_error_stat": false, 00:23:38.210 "rdma_srq_size": 0, 00:23:38.210 "io_path_stat": false, 00:23:38.210 "allow_accel_sequence": false, 00:23:38.210 "rdma_max_cq_size": 0, 00:23:38.210 "rdma_cm_event_timeout_ms": 0, 00:23:38.210 "dhchap_digests": [ 00:23:38.210 "sha256", 00:23:38.210 "sha384", 00:23:38.210 "sha512" 00:23:38.210 ], 00:23:38.210 "dhchap_dhgroups": [ 00:23:38.210 "null", 00:23:38.210 "ffdhe2048", 00:23:38.210 "ffdhe3072", 00:23:38.210 "ffdhe4096", 00:23:38.210 "ffdhe6144", 00:23:38.210 "ffdhe8192" 00:23:38.210 ] 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "bdev_nvme_set_hotplug", 00:23:38.210 "params": { 00:23:38.210 "period_us": 100000, 00:23:38.210 "enable": false 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "bdev_malloc_create", 00:23:38.210 "params": { 00:23:38.210 "name": "malloc0", 00:23:38.210 "num_blocks": 8192, 00:23:38.210 "block_size": 4096, 00:23:38.210 "physical_block_size": 4096, 00:23:38.210 "uuid": "ed3f87b4-306d-4a3d-9624-01b0bd065220", 00:23:38.210 "optimal_io_boundary": 0, 00:23:38.210 "md_size": 0, 00:23:38.210 "dif_type": 0, 00:23:38.210 "dif_is_head_of_md": false, 00:23:38.210 "dif_pi_format": 0 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "bdev_wait_for_examine" 
00:23:38.210 } 00:23:38.210 ] 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "subsystem": "nbd", 00:23:38.210 "config": [] 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "subsystem": "scheduler", 00:23:38.210 "config": [ 00:23:38.210 { 00:23:38.210 "method": "framework_set_scheduler", 00:23:38.210 "params": { 00:23:38.210 "name": "static" 00:23:38.210 } 00:23:38.210 } 00:23:38.210 ] 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "subsystem": "nvmf", 00:23:38.210 "config": [ 00:23:38.210 { 00:23:38.210 "method": "nvmf_set_config", 00:23:38.210 "params": { 00:23:38.210 "discovery_filter": "match_any", 00:23:38.210 "admin_cmd_passthru": { 00:23:38.210 "identify_ctrlr": false 00:23:38.210 }, 00:23:38.210 "dhchap_digests": [ 00:23:38.210 "sha256", 00:23:38.210 "sha384", 00:23:38.210 "sha512" 00:23:38.210 ], 00:23:38.210 "dhchap_dhgroups": [ 00:23:38.210 "null", 00:23:38.210 "ffdhe2048", 00:23:38.210 "ffdhe3072", 00:23:38.210 "ffdhe4096", 00:23:38.210 "ffdhe6144", 00:23:38.210 "ffdhe8192" 00:23:38.210 ] 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "nvmf_set_max_subsystems", 00:23:38.210 "params": { 00:23:38.210 "max_subsystems": 1024 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "nvmf_set_crdt", 00:23:38.210 "params": { 00:23:38.210 "crdt1": 0, 00:23:38.210 "crdt2": 0, 00:23:38.210 "crdt3": 0 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "nvmf_create_transport", 00:23:38.210 "params": { 00:23:38.210 "trtype": "TCP", 00:23:38.210 "max_queue_depth": 128, 00:23:38.210 "max_io_qpairs_per_ctrlr": 127, 00:23:38.210 "in_capsule_data_size": 4096, 00:23:38.210 "max_io_size": 131072, 00:23:38.210 "io_unit_size": 131072, 00:23:38.210 "max_aq_depth": 128, 00:23:38.210 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.210 "num_shared_buffers": 511, 00:23:38.210 "buf_cache_size": 4294967295, 00:23:38.210 "dif_insert_or_strip": false, 00:23:38.210 "zcopy": false, 00:23:38.210 "c2h_success": false, 00:23:38.210 "sock_priority": 0, 00:23:38.210 "abort_timeout_sec": 1, 00:23:38.210 "ack_timeout": 0, 00:23:38.210 "data_wr_pool_size": 0 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "nvmf_create_subsystem", 00:23:38.210 "params": { 00:23:38.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.210 "allow_any_host": false, 00:23:38.210 "serial_number": "00000000000000000000", 00:23:38.210 "model_number": "SPDK bdev Controller", 00:23:38.210 "max_namespaces": 32, 00:23:38.210 "min_cntlid": 1, 00:23:38.210 "max_cntlid": 65519, 00:23:38.210 "ana_reporting": false 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "nvmf_subsystem_add_host", 00:23:38.210 "params": { 00:23:38.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.210 "host": "nqn.2016-06.io.spdk:host1", 00:23:38.210 "psk": "key0" 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "nvmf_subsystem_add_ns", 00:23:38.210 "params": { 00:23:38.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.210 "namespace": { 00:23:38.210 "nsid": 1, 00:23:38.210 "bdev_name": "malloc0", 00:23:38.210 "nguid": "ED3F87B4306D4A3D962401B0BD065220", 00:23:38.210 "uuid": "ed3f87b4-306d-4a3d-9624-01b0bd065220", 00:23:38.210 "no_auto_visible": false 00:23:38.210 } 00:23:38.210 } 00:23:38.210 }, 00:23:38.210 { 00:23:38.210 "method": "nvmf_subsystem_add_listener", 00:23:38.210 "params": { 00:23:38.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.210 "listen_address": { 00:23:38.210 "trtype": "TCP", 00:23:38.210 "adrfam": "IPv4", 
00:23:38.210 "traddr": "10.0.0.2", 00:23:38.210 "trsvcid": "4420" 00:23:38.210 }, 00:23:38.210 "secure_channel": false, 00:23:38.210 "sock_impl": "ssl" 00:23:38.210 } 00:23:38.210 } 00:23:38.210 ] 00:23:38.210 } 00:23:38.210 ] 00:23:38.210 }' 00:23:38.210 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1597765 00:23:38.210 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1597765 00:23:38.210 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:38.211 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1597765 ']' 00:23:38.211 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.211 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.211 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.211 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.211 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.211 [2024-12-09 05:16:52.123580] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:38.211 [2024-12-09 05:16:52.123698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.472 [2024-12-09 05:16:52.278265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.472 [2024-12-09 05:16:52.362037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.472 [2024-12-09 05:16:52.362079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.472 [2024-12-09 05:16:52.362088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.472 [2024-12-09 05:16:52.362099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.472 [2024-12-09 05:16:52.362109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.472 [2024-12-09 05:16:52.363078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.731 [2024-12-09 05:16:52.701692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.992 [2024-12-09 05:16:52.733729] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.992 [2024-12-09 05:16:52.733978] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1598104 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1598104 /var/tmp/bdevperf.sock 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1598104 ']' 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
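With the listener up, the TLS-relevant part of the target configuration above reduces to three RPCs: register the PSK file as keyring key0, bind that key to the allowed host NQN, and open the TCP listener with the ssl socket implementation. A hedged sketch of the same setup driven through rpc.py against a live target (flag spellings are assumed from current SPDK rpc.py and may differ between versions; verify against your tree):

    # Register the PSK interchange file under the name the host entry refers to.
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o7vytNf5Us
    # Allow host1 on cnode1 and tie the PSK to that host.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
    # Listen on the test address using the ssl socket implementation.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 --sock-impl ssl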
00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.992 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:38.992 "subsystems": [ 00:23:38.992 { 00:23:38.992 "subsystem": "keyring", 00:23:38.992 "config": [ 00:23:38.992 { 00:23:38.992 "method": "keyring_file_add_key", 00:23:38.992 "params": { 00:23:38.992 "name": "key0", 00:23:38.992 "path": "/tmp/tmp.o7vytNf5Us" 00:23:38.992 } 00:23:38.992 } 00:23:38.992 ] 00:23:38.992 }, 00:23:38.992 { 00:23:38.992 "subsystem": "iobuf", 00:23:38.992 "config": [ 00:23:38.992 { 00:23:38.992 "method": "iobuf_set_options", 00:23:38.992 "params": { 00:23:38.992 "small_pool_count": 8192, 00:23:38.992 "large_pool_count": 1024, 00:23:38.992 "small_bufsize": 8192, 00:23:38.992 "large_bufsize": 135168, 00:23:38.992 "enable_numa": false 00:23:38.992 } 00:23:38.992 } 00:23:38.992 ] 00:23:38.992 }, 00:23:38.992 { 00:23:38.992 "subsystem": "sock", 00:23:38.992 "config": [ 00:23:38.992 { 00:23:38.992 "method": "sock_set_default_impl", 00:23:38.992 "params": { 00:23:38.992 "impl_name": "posix" 00:23:38.992 } 00:23:38.992 }, 00:23:38.992 { 00:23:38.992 "method": "sock_impl_set_options", 00:23:38.992 "params": { 00:23:38.992 "impl_name": "ssl", 00:23:38.992 "recv_buf_size": 4096, 00:23:38.992 "send_buf_size": 4096, 00:23:38.992 "enable_recv_pipe": true, 00:23:38.992 "enable_quickack": false, 00:23:38.992 "enable_placement_id": 0, 00:23:38.992 "enable_zerocopy_send_server": true, 00:23:38.992 "enable_zerocopy_send_client": false, 00:23:38.992 "zerocopy_threshold": 0, 00:23:38.992 "tls_version": 0, 00:23:38.992 "enable_ktls": false 00:23:38.992 } 00:23:38.992 }, 00:23:38.992 { 00:23:38.992 "method": "sock_impl_set_options", 00:23:38.992 "params": { 00:23:38.992 "impl_name": "posix", 00:23:38.992 "recv_buf_size": 2097152, 00:23:38.992 "send_buf_size": 2097152, 00:23:38.992 "enable_recv_pipe": true, 00:23:38.992 "enable_quickack": false, 00:23:38.992 "enable_placement_id": 0, 00:23:38.992 "enable_zerocopy_send_server": true, 00:23:38.992 "enable_zerocopy_send_client": false, 00:23:38.992 "zerocopy_threshold": 0, 00:23:38.992 "tls_version": 0, 00:23:38.992 "enable_ktls": false 00:23:38.992 } 00:23:38.992 } 00:23:38.992 ] 00:23:38.992 }, 00:23:38.992 { 00:23:38.992 "subsystem": "vmd", 00:23:38.992 "config": [] 00:23:38.992 }, 00:23:38.992 { 00:23:38.992 "subsystem": "accel", 00:23:38.992 "config": [ 00:23:38.992 { 00:23:38.992 "method": "accel_set_options", 00:23:38.992 "params": { 00:23:38.992 "small_cache_size": 128, 00:23:38.992 "large_cache_size": 16, 00:23:38.992 "task_count": 2048, 00:23:38.992 "sequence_count": 2048, 00:23:38.992 "buf_count": 2048 00:23:38.992 } 00:23:38.992 } 00:23:38.992 ] 00:23:38.992 }, 00:23:38.992 { 00:23:38.992 "subsystem": "bdev", 00:23:38.992 "config": [ 00:23:38.992 { 00:23:38.992 "method": "bdev_set_options", 00:23:38.992 "params": { 00:23:38.992 "bdev_io_pool_size": 65535, 00:23:38.992 "bdev_io_cache_size": 256, 00:23:38.992 "bdev_auto_examine": true, 00:23:38.992 "iobuf_small_cache_size": 128, 00:23:38.992 "iobuf_large_cache_size": 16 00:23:38.992 } 00:23:38.992 }, 00:23:38.992 { 00:23:38.992 "method": 
"bdev_raid_set_options", 00:23:38.992 "params": { 00:23:38.992 "process_window_size_kb": 1024, 00:23:38.992 "process_max_bandwidth_mb_sec": 0 00:23:38.993 } 00:23:38.993 }, 00:23:38.993 { 00:23:38.993 "method": "bdev_iscsi_set_options", 00:23:38.993 "params": { 00:23:38.993 "timeout_sec": 30 00:23:38.993 } 00:23:38.993 }, 00:23:38.993 { 00:23:38.993 "method": "bdev_nvme_set_options", 00:23:38.993 "params": { 00:23:38.993 "action_on_timeout": "none", 00:23:38.993 "timeout_us": 0, 00:23:38.993 "timeout_admin_us": 0, 00:23:38.993 "keep_alive_timeout_ms": 10000, 00:23:38.993 "arbitration_burst": 0, 00:23:38.993 "low_priority_weight": 0, 00:23:38.993 "medium_priority_weight": 0, 00:23:38.993 "high_priority_weight": 0, 00:23:38.993 "nvme_adminq_poll_period_us": 10000, 00:23:38.993 "nvme_ioq_poll_period_us": 0, 00:23:38.993 "io_queue_requests": 512, 00:23:38.993 "delay_cmd_submit": true, 00:23:38.993 "transport_retry_count": 4, 00:23:38.993 "bdev_retry_count": 3, 00:23:38.993 "transport_ack_timeout": 0, 00:23:38.993 "ctrlr_loss_timeout_sec": 0, 00:23:38.993 "reconnect_delay_sec": 0, 00:23:38.993 "fast_io_fail_timeout_sec": 0, 00:23:38.993 "disable_auto_failback": false, 00:23:38.993 "generate_uuids": false, 00:23:38.993 "transport_tos": 0, 00:23:38.993 "nvme_error_stat": false, 00:23:38.993 "rdma_srq_size": 0, 00:23:38.993 "io_path_stat": false, 00:23:38.993 "allow_accel_sequence": false, 00:23:38.993 "rdma_max_cq_size": 0, 00:23:38.993 "rdma_cm_event_timeout_ms": 0, 00:23:38.993 "dhchap_digests": [ 00:23:38.993 "sha256", 00:23:38.993 "sha384", 00:23:38.993 "sha512" 00:23:38.993 ], 00:23:38.993 "dhchap_dhgroups": [ 00:23:38.993 "null", 00:23:38.993 "ffdhe2048", 00:23:38.993 "ffdhe3072", 00:23:38.993 "ffdhe4096", 00:23:38.993 "ffdhe6144", 00:23:38.993 "ffdhe8192" 00:23:38.993 ] 00:23:38.993 } 00:23:38.993 }, 00:23:38.993 { 00:23:38.993 "method": "bdev_nvme_attach_controller", 00:23:38.993 "params": { 00:23:38.993 "name": "nvme0", 00:23:38.993 "trtype": "TCP", 00:23:38.993 "adrfam": "IPv4", 00:23:38.993 "traddr": "10.0.0.2", 00:23:38.993 "trsvcid": "4420", 00:23:38.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.993 "prchk_reftag": false, 00:23:38.993 "prchk_guard": false, 00:23:38.993 "ctrlr_loss_timeout_sec": 0, 00:23:38.993 "reconnect_delay_sec": 0, 00:23:38.993 "fast_io_fail_timeout_sec": 0, 00:23:38.993 "psk": "key0", 00:23:38.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.993 "hdgst": false, 00:23:38.993 "ddgst": false, 00:23:38.993 "multipath": "multipath" 00:23:38.993 } 00:23:38.993 }, 00:23:38.993 { 00:23:38.993 "method": "bdev_nvme_set_hotplug", 00:23:38.993 "params": { 00:23:38.993 "period_us": 100000, 00:23:38.993 "enable": false 00:23:38.993 } 00:23:38.993 }, 00:23:38.993 { 00:23:38.993 "method": "bdev_enable_histogram", 00:23:38.993 "params": { 00:23:38.993 "name": "nvme0n1", 00:23:38.993 "enable": true 00:23:38.993 } 00:23:38.993 }, 00:23:38.993 { 00:23:38.993 "method": "bdev_wait_for_examine" 00:23:38.993 } 00:23:38.993 ] 00:23:38.993 }, 00:23:38.993 { 00:23:38.993 "subsystem": "nbd", 00:23:38.993 "config": [] 00:23:38.993 } 00:23:38.993 ] 00:23:38.993 }' 00:23:39.252 [2024-12-09 05:16:52.991178] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:39.252 [2024-12-09 05:16:52.991289] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598104 ] 00:23:39.252 [2024-12-09 05:16:53.120900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.252 [2024-12-09 05:16:53.194963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.513 [2024-12-09 05:16:53.457528] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.773 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.773 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:40.032 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.032 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:40.032 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.032 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:40.290 Running I/O for 1 seconds... 00:23:41.227 4945.00 IOPS, 19.32 MiB/s 00:23:41.227 Latency(us) 00:23:41.227 [2024-12-09T04:16:55.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.227 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:41.227 Verification LBA range: start 0x0 length 0x2000 00:23:41.227 nvme0n1 : 1.01 4999.60 19.53 0.00 0.00 25415.04 6280.53 63351.47 00:23:41.227 [2024-12-09T04:16:55.224Z] =================================================================================================================== 00:23:41.227 [2024-12-09T04:16:55.224Z] Total : 4999.60 19.53 0.00 0.00 25415.04 6280.53 63351.47 00:23:41.227 { 00:23:41.227 "results": [ 00:23:41.227 { 00:23:41.227 "job": "nvme0n1", 00:23:41.227 "core_mask": "0x2", 00:23:41.227 "workload": "verify", 00:23:41.227 "status": "finished", 00:23:41.227 "verify_range": { 00:23:41.227 "start": 0, 00:23:41.227 "length": 8192 00:23:41.227 }, 00:23:41.227 "queue_depth": 128, 00:23:41.227 "io_size": 4096, 00:23:41.227 "runtime": 1.014682, 00:23:41.227 "iops": 4999.59593251876, 00:23:41.227 "mibps": 19.529671611401405, 00:23:41.227 "io_failed": 0, 00:23:41.227 "io_timeout": 0, 00:23:41.227 "avg_latency_us": 25415.040799001246, 00:23:41.227 "min_latency_us": 6280.533333333334, 00:23:41.227 "max_latency_us": 63351.46666666667 00:23:41.227 } 00:23:41.227 ], 00:23:41.227 "core_count": 1 00:23:41.227 } 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:41.227 nvmf_trace.0 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1598104 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1598104 ']' 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1598104 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.227 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1598104 00:23:41.487 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:41.487 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:41.487 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1598104' 00:23:41.487 killing process with pid 1598104 00:23:41.487 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1598104 00:23:41.487 Received shutdown signal, test time was about 1.000000 seconds 00:23:41.487 00:23:41.487 Latency(us) 00:23:41.487 [2024-12-09T04:16:55.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.487 [2024-12-09T04:16:55.484Z] =================================================================================================================== 00:23:41.487 [2024-12-09T04:16:55.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.487 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1598104 00:23:41.746 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:41.746 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.746 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:41.746 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.746 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:41.746 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.746 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.746 rmmod nvme_tcp 00:23:41.746 rmmod nvme_fabrics 00:23:41.746 rmmod nvme_keyring 00:23:41.746 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.005 05:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1597765 ']' 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1597765 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1597765 ']' 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1597765 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1597765 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1597765' 00:23:42.005 killing process with pid 1597765 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1597765 00:23:42.005 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1597765 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.575 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.AU58FkesBU /tmp/tmp.PsZvmEw32O /tmp/tmp.o7vytNf5Us 00:23:45.116 00:23:45.116 real 1m37.457s 00:23:45.116 user 2m33.522s 00:23:45.116 sys 0m27.353s 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.116 ************************************ 00:23:45.116 END TEST nvmf_tls 
00:23:45.116 ************************************ 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:45.116 ************************************ 00:23:45.116 START TEST nvmf_fips 00:23:45.116 ************************************ 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:45.116 * Looking for test storage... 00:23:45.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:45.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.116 --rc genhtml_branch_coverage=1 00:23:45.116 --rc genhtml_function_coverage=1 00:23:45.116 --rc genhtml_legend=1 00:23:45.116 --rc geninfo_all_blocks=1 00:23:45.116 --rc geninfo_unexecuted_blocks=1 00:23:45.116 00:23:45.116 ' 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:45.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.116 --rc genhtml_branch_coverage=1 00:23:45.116 --rc genhtml_function_coverage=1 00:23:45.116 --rc genhtml_legend=1 00:23:45.116 --rc geninfo_all_blocks=1 00:23:45.116 --rc geninfo_unexecuted_blocks=1 00:23:45.116 00:23:45.116 ' 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:45.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.116 --rc genhtml_branch_coverage=1 00:23:45.116 --rc genhtml_function_coverage=1 00:23:45.116 --rc genhtml_legend=1 00:23:45.116 --rc geninfo_all_blocks=1 00:23:45.116 --rc geninfo_unexecuted_blocks=1 00:23:45.116 00:23:45.116 ' 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:45.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.116 --rc genhtml_branch_coverage=1 00:23:45.116 --rc genhtml_function_coverage=1 00:23:45.116 --rc genhtml_legend=1 00:23:45.116 --rc geninfo_all_blocks=1 00:23:45.116 --rc geninfo_unexecuted_blocks=1 00:23:45.116 00:23:45.116 ' 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.116 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:45.117 05:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:45.117 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:45.117 Error setting digest 00:23:45.117 4032661D757F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:45.117 4032661D757F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:45.117 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:45.118 
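The "Error setting digest" block above is the point of the check: under the generated spdk_fips.conf the FIPS provider refuses to fetch MD5, and the NOT wrapper turns that failure into a pass. A minimal sketch of the same probe, assuming a FIPS-only OpenSSL configuration is already in place:

    # With a FIPS-only provider config, MD5 must be rejected; the self-check
    # passes only when this command exits non-zero.
    if OPENSSL_CONF=spdk_fips.conf openssl md5 <(echo test) 2>/dev/null; then
        echo 'FIPS self-check failed: MD5 was allowed' >&2
        exit 1
    fi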
05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:45.118 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.251 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.251 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.251 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.252 05:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:53.252 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:53.252 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.252 05:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:53.252 Found net devices under 0000:31:00.0: cvl_0_0 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:53.252 Found net devices under 0000:31:00.1: cvl_0_1 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.252 05:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:23:53.252 00:23:53.252 --- 10.0.0.2 ping statistics --- 00:23:53.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.252 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:53.252 00:23:53.252 --- 10.0.0.1 ping statistics --- 00:23:53.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.252 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.252 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1602919 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1602919 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1602919 ']' 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.253 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.253 [2024-12-09 05:17:06.723246] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:53.253 [2024-12-09 05:17:06.723378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.253 [2024-12-09 05:17:06.893486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.253 [2024-12-09 05:17:07.017036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.253 [2024-12-09 05:17:07.017104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.253 [2024-12-09 05:17:07.017121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.253 [2024-12-09 05:17:07.017135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.253 [2024-12-09 05:17:07.017145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.253 [2024-12-09 05:17:07.018681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.532 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.532 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:53.532 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.532 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.532 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.532 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.532 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:53.532 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:53.793 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:53.793 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ybT 00:23:53.793 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:53.793 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ybT 00:23:53.793 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ybT 00:23:53.793 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ybT 00:23:53.793 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:53.793 [2024-12-09 05:17:07.700904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.793 [2024-12-09 05:17:07.716901] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.793 [2024-12-09 05:17:07.717273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.793 malloc0 00:23:54.055 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.055 05:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1603203 00:23:54.055 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1603203 /var/tmp/bdevperf.sock 00:23:54.055 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.055 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1603203 ']' 00:23:54.055 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.055 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.055 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.055 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.055 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:54.055 [2024-12-09 05:17:07.936943] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:54.055 [2024-12-09 05:17:07.937076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603203 ] 00:23:54.316 [2024-12-09 05:17:08.094842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.316 [2024-12-09 05:17:08.220020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.890 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.890 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:54.890 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ybT 00:23:55.152 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.152 [2024-12-09 05:17:09.057494] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.412 TLSTESTn1 00:23:55.412 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.412 Running I/O for 10 seconds... 
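Everything the FIPS test negotiates rests on the pre-shared key staged just above: a TLS PSK in the NVMe-oF interchange format (NVMeTLSkey-1:01:<base64 material>:), written to a mode-0600 temp file, registered with bdevperf's keyring, and then referenced by name when attaching the controller. A condensed sketch of the initiator-side sequence, using the same test key fips.sh embeds (rpc.py here is shorthand for the full scripts/rpc.py path in the workspace):

# Stage the interchange-format PSK in a root-only file.
KEY_PATH=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"
# Register it with the bdevperf keyring, then attach the TLS-enabled controller.
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0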
00:23:57.301 1322.00 IOPS, 5.16 MiB/s [2024-12-09T04:17:12.684Z] 1326.00 IOPS, 5.18 MiB/s [2024-12-09T04:17:13.675Z] 1895.67 IOPS, 7.40 MiB/s [2024-12-09T04:17:14.615Z] 1896.25 IOPS, 7.41 MiB/s [2024-12-09T04:17:15.554Z] 1886.20 IOPS, 7.37 MiB/s [2024-12-09T04:17:16.493Z] 1945.83 IOPS, 7.60 MiB/s [2024-12-09T04:17:17.432Z] 2175.86 IOPS, 8.50 MiB/s [2024-12-09T04:17:18.370Z] 2111.88 IOPS, 8.25 MiB/s [2024-12-09T04:17:19.310Z] 2119.33 IOPS, 8.28 MiB/s [2024-12-09T04:17:19.571Z] 2097.00 IOPS, 8.19 MiB/s 00:24:05.574 Latency(us) 00:24:05.574 [2024-12-09T04:17:19.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.574 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:05.574 Verification LBA range: start 0x0 length 0x2000 00:24:05.574 TLSTESTn1 : 10.05 2098.27 8.20 0.00 0.00 60881.95 6963.20 215831.89 00:24:05.574 [2024-12-09T04:17:19.571Z] =================================================================================================================== 00:24:05.574 [2024-12-09T04:17:19.571Z] Total : 2098.27 8.20 0.00 0.00 60881.95 6963.20 215831.89 00:24:05.574 { 00:24:05.574 "results": [ 00:24:05.574 { 00:24:05.574 "job": "TLSTESTn1", 00:24:05.574 "core_mask": "0x4", 00:24:05.574 "workload": "verify", 00:24:05.574 "status": "finished", 00:24:05.574 "verify_range": { 00:24:05.574 "start": 0, 00:24:05.574 "length": 8192 00:24:05.574 }, 00:24:05.574 "queue_depth": 128, 00:24:05.574 "io_size": 4096, 00:24:05.574 "runtime": 10.054961, 00:24:05.574 "iops": 2098.2677108344824, 00:24:05.574 "mibps": 8.196358245447197, 00:24:05.574 "io_failed": 0, 00:24:05.574 "io_timeout": 0, 00:24:05.574 "avg_latency_us": 60881.95348943027, 00:24:05.574 "min_latency_us": 6963.2, 00:24:05.574 "max_latency_us": 215831.89333333334 00:24:05.574 } 00:24:05.574 ], 00:24:05.574 "core_count": 1 00:24:05.574 } 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:05.574 nvmf_trace.0 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1603203 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1603203 ']' 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill 
-0 1603203 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1603203 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1603203' 00:24:05.574 killing process with pid 1603203 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1603203 00:24:05.574 Received shutdown signal, test time was about 10.000000 seconds 00:24:05.574 00:24:05.574 Latency(us) 00:24:05.574 [2024-12-09T04:17:19.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.574 [2024-12-09T04:17:19.571Z] =================================================================================================================== 00:24:05.574 [2024-12-09T04:17:19.571Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.574 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1603203 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:06.143 rmmod nvme_tcp 00:24:06.143 rmmod nvme_fabrics 00:24:06.143 rmmod nvme_keyring 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1602919 ']' 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1602919 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1602919 ']' 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1602919 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.143 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1602919 00:24:06.403 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.403 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:06.403 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1602919' 00:24:06.403 killing process with pid 1602919 00:24:06.403 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1602919 00:24:06.403 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1602919 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.975 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.885 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.886 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ybT 00:24:08.886 00:24:08.886 real 0m24.282s 00:24:08.886 user 0m26.686s 00:24:08.886 sys 0m9.376s 00:24:08.886 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.886 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:08.886 ************************************ 00:24:08.886 END TEST nvmf_fips 00:24:08.886 ************************************ 00:24:09.146 05:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:09.146 05:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:09.146 05:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.146 05:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:09.146 ************************************ 00:24:09.146 START TEST nvmf_control_msg_list 00:24:09.146 ************************************ 00:24:09.146 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:09.146 * Looking for test storage... 
00:24:09.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:09.146 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:09.146 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:09.146 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.448 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:09.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.449 --rc genhtml_branch_coverage=1 00:24:09.449 --rc genhtml_function_coverage=1 00:24:09.449 --rc genhtml_legend=1 00:24:09.449 --rc geninfo_all_blocks=1 00:24:09.449 --rc geninfo_unexecuted_blocks=1 00:24:09.449 00:24:09.449 ' 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:09.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.449 --rc genhtml_branch_coverage=1 00:24:09.449 --rc genhtml_function_coverage=1 00:24:09.449 --rc genhtml_legend=1 00:24:09.449 --rc geninfo_all_blocks=1 00:24:09.449 --rc geninfo_unexecuted_blocks=1 00:24:09.449 00:24:09.449 ' 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:09.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.449 --rc genhtml_branch_coverage=1 00:24:09.449 --rc genhtml_function_coverage=1 00:24:09.449 --rc genhtml_legend=1 00:24:09.449 --rc geninfo_all_blocks=1 00:24:09.449 --rc geninfo_unexecuted_blocks=1 00:24:09.449 00:24:09.449 ' 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:09.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.449 --rc genhtml_branch_coverage=1 00:24:09.449 --rc genhtml_function_coverage=1 00:24:09.449 --rc genhtml_legend=1 00:24:09.449 --rc geninfo_all_blocks=1 00:24:09.449 --rc geninfo_unexecuted_blocks=1 00:24:09.449 00:24:09.449 ' 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.449 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.450 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:17.582 05:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:17.582 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.582 05:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:17.582 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:17.582 Found net devices under 0000:31:00.0: cvl_0_0 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:17.582 Found net devices under 0000:31:00.1: cvl_0_1 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.582 05:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.582 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:24:17.583 00:24:17.583 --- 10.0.0.2 ping statistics --- 00:24:17.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.583 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:24:17.583 00:24:17.583 --- 10.0.0.1 ping statistics --- 00:24:17.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.583 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1609915 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1609915 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1609915 ']' 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.583 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.583 [2024-12-09 05:17:30.900296] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:24:17.583 [2024-12-09 05:17:30.900413] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.583 [2024-12-09 05:17:31.065580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.583 [2024-12-09 05:17:31.188810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.583 [2024-12-09 05:17:31.188894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.583 [2024-12-09 05:17:31.188908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.583 [2024-12-09 05:17:31.188922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.583 [2024-12-09 05:17:31.188937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.583 [2024-12-09 05:17:31.190464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.842 [2024-12-09 05:17:31.741943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.842 Malloc0 00:24:17.842 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.843 05:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.843 [2024-12-09 05:17:31.817537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1610104 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1610106 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1610108 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1610104 00:24:17.843 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:18.102 [2024-12-09 05:17:31.979984] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:18.102 [2024-12-09 05:17:31.980550] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:18.102 [2024-12-09 05:17:31.981091] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:19.484 Initializing NVMe Controllers 00:24:19.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:19.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:19.484 Initialization complete. Launching workers. 
00:24:19.484 ======================================================== 00:24:19.484 Latency(us) 00:24:19.484 Device Information : IOPS MiB/s Average min max 00:24:19.484 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40910.68 40803.06 41150.58 00:24:19.484 ======================================================== 00:24:19.484 Total : 25.00 0.10 40910.68 40803.06 41150.58 00:24:19.484 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1610106 00:24:19.484 Initializing NVMe Controllers 00:24:19.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:19.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:19.484 Initialization complete. Launching workers. 00:24:19.484 ======================================================== 00:24:19.484 Latency(us) 00:24:19.484 Device Information : IOPS MiB/s Average min max 00:24:19.484 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2094.00 8.18 477.32 185.97 853.79 00:24:19.484 ======================================================== 00:24:19.484 Total : 2094.00 8.18 477.32 185.97 853.79 00:24:19.484 00:24:19.484 Initializing NVMe Controllers 00:24:19.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:19.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:19.484 Initialization complete. Launching workers. 00:24:19.484 ======================================================== 00:24:19.484 Latency(us) 00:24:19.484 Device Information : IOPS MiB/s Average min max 00:24:19.484 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40906.71 40803.46 41038.05 00:24:19.484 ======================================================== 00:24:19.484 Total : 25.00 0.10 40906.71 40803.46 41038.05 00:24:19.484 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1610108 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:19.484 rmmod nvme_tcp 00:24:19.484 rmmod nvme_fabrics 00:24:19.484 rmmod nvme_keyring 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 1609915 ']' 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1609915 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1609915 ']' 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1609915 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1609915 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1609915' 00:24:19.484 killing process with pid 1609915 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1609915 00:24:19.484 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1609915 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.425 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:22.969 00:24:22.969 real 0m13.527s 00:24:22.969 user 0m9.291s 00:24:22.969 sys 0m6.858s 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.969 ************************************ 00:24:22.969 END TEST nvmf_control_msg_list 00:24:22.969 
************************************ 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:22.969 ************************************ 00:24:22.969 START TEST nvmf_wait_for_buf 00:24:22.969 ************************************ 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:22.969 * Looking for test storage... 00:24:22.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.969 --rc genhtml_branch_coverage=1 00:24:22.969 --rc genhtml_function_coverage=1 00:24:22.969 --rc genhtml_legend=1 00:24:22.969 --rc geninfo_all_blocks=1 00:24:22.969 --rc geninfo_unexecuted_blocks=1 00:24:22.969 00:24:22.969 ' 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.969 --rc genhtml_branch_coverage=1 00:24:22.969 --rc genhtml_function_coverage=1 00:24:22.969 --rc genhtml_legend=1 00:24:22.969 --rc geninfo_all_blocks=1 00:24:22.969 --rc geninfo_unexecuted_blocks=1 00:24:22.969 00:24:22.969 ' 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.969 --rc genhtml_branch_coverage=1 00:24:22.969 --rc genhtml_function_coverage=1 00:24:22.969 --rc genhtml_legend=1 00:24:22.969 --rc geninfo_all_blocks=1 00:24:22.969 --rc geninfo_unexecuted_blocks=1 00:24:22.969 00:24:22.969 ' 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.969 --rc genhtml_branch_coverage=1 00:24:22.969 --rc genhtml_function_coverage=1 00:24:22.969 --rc genhtml_legend=1 00:24:22.969 --rc geninfo_all_blocks=1 00:24:22.969 --rc geninfo_unexecuted_blocks=1 00:24:22.969 00:24:22.969 ' 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.969 05:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.969 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:22.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:22.970 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.109 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.109 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.109 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.110 
05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:31.110 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:31.110 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:31.110 Found net devices under 0000:31:00.0: cvl_0_0 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:31.110 Found net devices under 0000:31:00.1: cvl_0_1 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.110 05:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.110 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.110 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:24:31.110 00:24:31.111 --- 10.0.0.2 ping statistics --- 00:24:31.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.111 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:24:31.111 00:24:31.111 --- 10.0.0.1 ping statistics --- 00:24:31.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.111 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1614720 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1614720 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1614720 ']' 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.111 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.111 [2024-12-09 05:17:44.471656] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
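The --wait-for-rpc flag on this second nvmf_tgt launch is the crux of the wait_for_buf test: it parks the app before subsystem initialization so the shared iobuf pool can be shrunk first, all but guaranteeing the TCP transport will have to retry small-buffer allocation under load (each 131072-byte read needs sixteen 8 KiB buffers from a pool of only 154). Condensed from the RPC records that follow, with every option value verbatim from the trace; routing them through scripts/rpc.py is an assumption standing in for the harness's rpc_cmd wrapper:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"   # stand-in for rpc_cmd
  $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
  $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # starve the pool
  $RPC framework_start_init            # only now let the subsystems initialize
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$SPDK/build/bin/spdk_nvme_perf" -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  # Pass criterion from wait_for_buf.sh@32-33: the transport must have been
  # forced to retry small-buffer allocation at least once (log: retry_count=2038).
  retries=$($RPC iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ "$retries" -eq 0 ]] && exit 1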
00:24:31.111 [2024-12-09 05:17:44.471794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.111 [2024-12-09 05:17:44.634991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.111 [2024-12-09 05:17:44.757414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.111 [2024-12-09 05:17:44.757485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.111 [2024-12-09 05:17:44.757499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.111 [2024-12-09 05:17:44.757512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.111 [2024-12-09 05:17:44.757525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.111 [2024-12-09 05:17:44.759046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:31.371 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.371 05:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.631 Malloc0 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.631 [2024-12-09 05:17:45.592641] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.631 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.891 [2024-12-09 05:17:45.629057] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.891 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.891 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:31.891 [2024-12-09 05:17:45.779989] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:33.275 Initializing NVMe Controllers 00:24:33.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:33.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:33.275 Initialization complete. Launching workers. 00:24:33.275 ======================================================== 00:24:33.275 Latency(us) 00:24:33.275 Device Information : IOPS MiB/s Average min max 00:24:33.275 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.92 16.11 32133.14 7923.56 63862.59 00:24:33.275 ======================================================== 00:24:33.275 Total : 128.92 16.11 32133.14 7923.56 63862.59 00:24:33.275 00:24:33.275 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:33.275 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:33.275 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.275 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:33.275 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.536 rmmod nvme_tcp 00:24:33.536 rmmod nvme_fabrics 00:24:33.536 rmmod nvme_keyring 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1614720 ']' 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1614720 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1614720 ']' 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1614720 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1614720 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1614720' 00:24:33.536 killing process with pid 1614720 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1614720 00:24:33.536 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1614720 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.481 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.397 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:36.397 00:24:36.397 real 0m13.754s 00:24:36.397 user 0m6.015s 00:24:36.397 sys 0m6.346s 00:24:36.397 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.397 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:36.397 ************************************ 00:24:36.397 END TEST nvmf_wait_for_buf 00:24:36.397 ************************************ 00:24:36.397 05:17:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:36.397 05:17:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:36.397 05:17:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.397 05:17:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.397 05:17:50 
00:24:36.659 ************************************
00:24:36.659 START TEST nvmf_fuzz
00:24:36.659 ************************************
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:24:36.659 * Looking for test storage...
00:24:36.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:36.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.659 --rc genhtml_branch_coverage=1 00:24:36.659 --rc genhtml_function_coverage=1 00:24:36.659 --rc genhtml_legend=1 00:24:36.659 --rc geninfo_all_blocks=1 00:24:36.659 --rc geninfo_unexecuted_blocks=1 00:24:36.659 00:24:36.659 ' 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:36.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.659 --rc genhtml_branch_coverage=1 00:24:36.659 --rc genhtml_function_coverage=1 00:24:36.659 --rc genhtml_legend=1 00:24:36.659 --rc geninfo_all_blocks=1 00:24:36.659 --rc geninfo_unexecuted_blocks=1 00:24:36.659 00:24:36.659 ' 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:36.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.659 --rc genhtml_branch_coverage=1 00:24:36.659 --rc genhtml_function_coverage=1 00:24:36.659 --rc genhtml_legend=1 00:24:36.659 --rc geninfo_all_blocks=1 00:24:36.659 --rc geninfo_unexecuted_blocks=1 00:24:36.659 00:24:36.659 ' 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:36.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.659 --rc genhtml_branch_coverage=1 00:24:36.659 --rc genhtml_function_coverage=1 00:24:36.659 --rc genhtml_legend=1 00:24:36.659 --rc geninfo_all_blocks=1 00:24:36.659 --rc geninfo_unexecuted_blocks=1 00:24:36.659 00:24:36.659 ' 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.659 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.660 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:44.804 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:44.805 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:44.805 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:44.805 Found net devices under 0000:31:00.0: cvl_0_0 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:44.805 Found net devices under 0000:31:00.1: cvl_0_1 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.805 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:44.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:24:44.805 00:24:44.805 --- 10.0.0.2 ping statistics --- 00:24:44.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.805 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:24:44.805 00:24:44.805 --- 10.0.0.1 ping statistics --- 00:24:44.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.805 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1619694 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1619694 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1619694 ']' 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
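With ping succeeding in both directions, the script brings up the target application: fabrics_fuzz.sh@13/@14 above launch nvmf_tgt inside the cvl_0_0_ns_spdk namespace and capture its pid, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock (max_retries=100, as logged). A condensed sketch of that launch-and-wait pattern; the polling loop is an illustration, not the exact waitforlisten body:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for _ in $(seq 1 100); do                      # mirrors max_retries=100
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
        # any successful RPC (rpc_get_methods is harmless) proves the socket is up
        "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done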
00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:44.805 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:45.376 Malloc0
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.376 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
00:24:45.377 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
00:25:17.624 Fuzzing completed. Shutting down the fuzz application
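The five rpc_cmd calls above are the complete fuzz-target configuration: a TCP transport, a 64 MB RAM-backed bdev with 512-byte blocks, a subsystem any host may connect to (-a) with serial SPDK00000000000001, its namespace, and a listener on 10.0.0.2:4420. Written out as equivalent by-hand rpc.py invocations (a sketch, not the script itself), followed by the 30-second seeded fuzz pass the log records:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192    # transport opts exactly as the harness passed them
    $RPC bdev_malloc_create -b Malloc0 64 512       # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 30 s run (-t) with a fixed seed (-S) against the listener's transport ID;
    # -N and -a are repeated verbatim from the log rather than glossed here
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a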
00:25:17.624
00:25:17.624 Dumping successful admin opcodes:
00:25:17.624 9, 10,
00:25:17.624 Dumping successful io opcodes:
00:25:17.624 0, 9,
00:25:17.624 NS: 0x2000008efec0 I/O qp, Total commands completed: 838729, total successful commands: 4872, random_seed: 3107433088
00:25:17.624 NS: 0x2000008efec0 admin qp, Total commands completed: 83696, total successful commands: 19, random_seed: 2600250752
00:25:17.625 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:25:17.625 Fuzzing completed. Shutting down the fuzz application
00:25:17.625
00:25:17.625 Dumping successful admin opcodes:
00:25:17.625
00:25:17.625 Dumping successful io opcodes:
00:25:17.625
00:25:17.625 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1904450013
00:25:17.625 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1904547981
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:17.625 rmmod nvme_tcp
00:25:17.625 rmmod nvme_fabrics
00:25:17.625 rmmod nvme_keyring
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1619694 ']'
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1619694
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1619694 ']'
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1619694
00:25:17.625 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.884 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1619694 00:25:17.884 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:17.884 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:17.884 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1619694' 00:25:17.884 killing process with pid 1619694 00:25:17.884 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1619694 00:25:17.884 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1619694 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.456 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:20.999 00:25:20.999 real 0m44.064s 00:25:20.999 user 0m58.788s 00:25:20.999 sys 0m15.413s 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.999 ************************************ 00:25:20.999 END TEST nvmf_fuzz 00:25:20.999 ************************************ 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:20.999 ************************************ 00:25:20.999 START 
TEST nvmf_multiconnection 00:25:20.999 ************************************ 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:20.999 * Looking for test storage... 00:25:20.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:20.999 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:21.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.000 --rc genhtml_branch_coverage=1 00:25:21.000 --rc genhtml_function_coverage=1 00:25:21.000 --rc genhtml_legend=1 00:25:21.000 --rc geninfo_all_blocks=1 00:25:21.000 --rc geninfo_unexecuted_blocks=1 00:25:21.000 00:25:21.000 ' 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:21.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.000 --rc genhtml_branch_coverage=1 00:25:21.000 --rc genhtml_function_coverage=1 00:25:21.000 --rc genhtml_legend=1 00:25:21.000 --rc geninfo_all_blocks=1 00:25:21.000 --rc geninfo_unexecuted_blocks=1 00:25:21.000 00:25:21.000 ' 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:21.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.000 --rc genhtml_branch_coverage=1 00:25:21.000 --rc genhtml_function_coverage=1 00:25:21.000 --rc genhtml_legend=1 00:25:21.000 --rc geninfo_all_blocks=1 00:25:21.000 --rc geninfo_unexecuted_blocks=1 00:25:21.000 00:25:21.000 ' 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:21.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.000 --rc genhtml_branch_coverage=1 00:25:21.000 --rc genhtml_function_coverage=1 00:25:21.000 --rc genhtml_legend=1 00:25:21.000 --rc geninfo_all_blocks=1 00:25:21.000 --rc geninfo_unexecuted_blocks=1 00:25:21.000 00:25:21.000 ' 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.000 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.001 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:29.150 05:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.150 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:29.150 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:29.150 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:29.150 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:29.151 Found net devices under 0000:31:00.0: cvl_0_0 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:29.151 Found net devices under 0000:31:00.1: cvl_0_1 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:29.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:25:29.151 00:25:29.151 --- 10.0.0.2 ping statistics --- 00:25:29.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.151 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:25:29.151 00:25:29.151 --- 10.0.0.1 ping statistics --- 00:25:29.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.151 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1630972 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1630972 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:29.151 05:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1630972 ']' 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.151 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.151 [2024-12-09 05:18:42.481913] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:25:29.151 [2024-12-09 05:18:42.482050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.151 [2024-12-09 05:18:42.646347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.151 [2024-12-09 05:18:42.776757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.151 [2024-12-09 05:18:42.776838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.151 [2024-12-09 05:18:42.776852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.151 [2024-12-09 05:18:42.776865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.151 [2024-12-09 05:18:42.776875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
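
The entries above show nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waiting in waitforlisten (pid 1630972) until the target answers on /var/tmp/spdk.sock. A minimal sketch of such a wait loop follows, assuming the stock scripts/rpc.py client and the real rpc_get_methods RPC; the helper name wait_for_rpc is hypothetical, and the actual autotest_common.sh waitforlisten adds more retries and diagnostics. Note that the UNIX-domain RPC socket is reachable without ip netns exec, since network namespaces do not isolate the filesystem.

    wait_for_rpc() {
        # Hypothetical sketch, not the real waitforlisten helper: poll until
        # the target (pid $1) is alive and answering RPCs on its UNIX socket,
        # matching the "Waiting for process to start up and listen on UNIX
        # domain socket /var/tmp/spdk.sock..." message logged above.
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i=0
        while (( i++ < 100 )); do
            kill -0 "$pid" 2>/dev/null || return 1          # target process died
            scripts/rpc.py -s "$rpc_sock" rpc_get_methods \
                >/dev/null 2>&1 && return 0                 # socket is up and serving
            sleep 0.5
        done
        return 1                                            # timed out
    }
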
00:25:29.151 [2024-12-09 05:18:42.779691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.151 [2024-12-09 05:18:42.779852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.151 [2024-12-09 05:18:42.779960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.151 [2024-12-09 05:18:42.780094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.413 [2024-12-09 05:18:43.320603] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.413 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 Malloc1 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
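
With the reactors up, the test creates the TCP transport and then enters the multiconnection.sh@21 loop whose iterations fill the entries above and below: for each i in 1..11 (NVMF_SUBSYS) it issues the same four RPCs. The following is condensed directly from the commands visible in this trace:

    for i in $(seq 1 "$NVMF_SUBSYS"); do    # NVMF_SUBSYS=11 in this run
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"          # 64 MiB malloc bdev, 512 B blocks
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK$i"                                       # allow any host; serial SPDK$i
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420                           # target-side IP inside the netns
    done

The serial number SPDK$i assigned here is what waitforserial later greps out of lsblk -l -o NAME,SERIAL to confirm that each nvme connect further down actually produced a block device.
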
00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 [2024-12-09 05:18:43.449508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 Malloc2 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 Malloc3 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.675 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 Malloc4 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 Malloc5 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.937 Malloc6 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.937 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.198 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 Malloc7 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
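
Every call in that loop goes through rpc_cmd from autotest_common.sh. Its implementation is not shown in this log; as an assumption, for these purposes it behaves like a thin wrapper that forwards the verb and arguments to the RPC client on the default socket (the real helper also keeps a persistent rpc.py session and handles retries, which this sketch omits):

    rpc_cmd() {
        # Assumed functional equivalent of the autotest_common.sh helper:
        # hand the RPC verb and its arguments to the target's default
        # UNIX-domain RPC socket. $rootdir would be the spdk checkout.
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"
    }

The xtrace_disable/set +x pairs bracketing each call above suppress tracing of the helper's internals, which is why only the rpc_cmd line itself appears in the trace.
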
00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 Malloc8 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.199 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 Malloc9 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:30.461 05:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 Malloc10 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 Malloc11 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.722 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:32.124 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:32.124 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:32.124 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.124 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:32.124 05:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:34.034 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:34.034 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:34.034 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:34.034 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:34.035 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.035 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:34.035 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.035 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:35.945 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:35.945 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:35.945 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.945 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:35.945 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:37.856 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:37.856 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.856 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:37.856 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:37.856 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.856 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:37.856 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.856 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:39.768 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:39.768 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:39.768 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:39.768 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:39.768 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:41.679 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:41.679 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:41.679 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:41.679 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:41.679 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.679 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:41.679 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.679 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:43.058 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:43.058 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:43.058 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:43.058 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:43.058 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:44.961 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:44.961 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:44.961 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:44.961 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:44.961 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.961 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:44.961 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.961 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:46.868 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:46.868 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:46.868 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:46.868 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:46.868 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:48.779 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:48.779 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:48.779 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:48.779 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:48.779 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:48.779 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:48.779 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.779 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:50.692 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:50.692 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:50.692 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.692 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:50.692 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:52.605 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:52.605 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:52.605 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:52.605 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:52.605 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:52.605 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:52.605 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.605 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:54.512 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:54.512 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:54.512 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.512 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:54.512 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:56.424 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:56.424 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:56.424 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:56.424 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:56.424 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.424 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:56.424 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.424 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:58.336 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:58.336 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:58.336 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.336 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:58.336 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:00.251 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:00.251 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:00.251 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:00.251 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:00.251 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.251 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:00.251 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.251 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
--hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:02.165 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:02.165 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:02.165 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.165 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:02.165 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:04.077 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:04.077 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:04.077 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:04.077 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:04.077 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.077 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:04.077 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.077 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:06.014 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:06.014 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:06.014 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.014 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:06.014 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.927 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.927 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.927 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:07.927 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.927 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.927 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.927 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.927 05:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:09.842 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:09.842 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:09.842 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.842 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:09.842 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:11.754 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:11.754 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:11.754 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:11.754 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:11.754 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:11.754 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:11.754 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:11.754 [global] 00:26:11.754 thread=1 00:26:11.754 invalidate=1 00:26:11.754 rw=read 00:26:11.754 time_based=1 00:26:11.754 runtime=10 00:26:11.754 ioengine=libaio 00:26:11.754 direct=1 00:26:11.754 bs=262144 00:26:11.754 iodepth=64 00:26:11.754 norandommap=1 00:26:11.754 numjobs=1 00:26:11.754 00:26:11.754 [job0] 00:26:11.754 filename=/dev/nvme0n1 00:26:11.754 [job1] 00:26:11.754 filename=/dev/nvme10n1 00:26:11.754 [job2] 00:26:11.754 filename=/dev/nvme1n1 00:26:11.754 [job3] 00:26:11.754 filename=/dev/nvme2n1 00:26:11.754 [job4] 00:26:11.754 filename=/dev/nvme3n1 00:26:11.754 [job5] 00:26:11.754 filename=/dev/nvme4n1 00:26:11.754 [job6] 00:26:11.754 filename=/dev/nvme5n1 00:26:11.754 [job7] 00:26:11.754 filename=/dev/nvme6n1 00:26:11.754 [job8] 00:26:11.754 filename=/dev/nvme7n1 00:26:11.754 [job9] 00:26:11.754 filename=/dev/nvme8n1 00:26:11.754 [job10] 00:26:11.754 filename=/dev/nvme9n1 00:26:11.754 Could not set queue depth (nvme0n1) 00:26:11.754 Could not set queue depth (nvme10n1) 00:26:11.754 Could not set queue depth (nvme1n1) 00:26:11.754 Could not set queue depth (nvme2n1) 00:26:11.754 Could not set queue depth (nvme3n1) 00:26:11.754 Could not set queue depth (nvme4n1) 00:26:11.754 Could not set queue depth (nvme5n1) 00:26:11.754 Could not set queue depth (nvme6n1) 00:26:11.754 Could not set queue depth (nvme7n1) 00:26:11.754 Could not set queue depth (nvme8n1) 00:26:11.754 Could not set queue depth (nvme9n1) 00:26:12.013 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.013 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
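
With all eleven namespaces attached, multiconnection.sh@33 hands them to fio through scripts/fio-wrapper. Judging from the job file echoed above, the wrapper flags map onto fio options roughly as: -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t read becomes rw=read, and -r 10 becomes runtime=10 with time_based=1; each /dev/nvmeXn1 device gets its own [jobN] section, and the repeated "Could not set queue depth" lines are fio's benign warning that it could not adjust the device's block-layer queue settings, not an I/O error. A hypothetical stand-in for the wrapper (the mapping is inferred from the echoed output, not from the wrapper source; only job0 is spelled out, the other ten sections follow the same pattern):

    # Generate the job file seen in the log above, then run it.
    cat > multiconnection.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio multiconnection.fio
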
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.013 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.013 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.013 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.013 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.013 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.013 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.013 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.013 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.014 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.014 fio-3.35 00:26:12.014 Starting 11 threads 00:26:24.232 00:26:24.232 job0: (groupid=0, jobs=1): err= 0: pid=1639546: Mon Dec 9 05:19:36 2024 00:26:24.232 read: IOPS=409, BW=102MiB/s (107MB/s)(1042MiB/10180msec) 00:26:24.232 slat (usec): min=9, max=205042, avg=2100.94, stdev=10933.46 00:26:24.232 clat (msec): min=3, max=960, avg=154.06, stdev=210.82 00:26:24.233 lat (msec): min=3, max=960, avg=156.16, stdev=213.90 00:26:24.233 clat percentiles (msec): 00:26:24.233 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 40], 00:26:24.233 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 48], 00:26:24.233 | 70.00th=[ 85], 80.00th=[ 253], 90.00th=[ 575], 95.00th=[ 667], 00:26:24.233 | 99.00th=[ 835], 99.50th=[ 885], 99.90th=[ 961], 99.95th=[ 961], 00:26:24.233 | 99.99th=[ 961] 00:26:24.233 bw ( KiB/s): min= 9728, max=408576, per=12.11%, avg=105075.20, stdev=133056.98, samples=20 00:26:24.233 iops : min= 38, max= 1596, avg=410.45, stdev=519.75, samples=20 00:26:24.233 lat (msec) : 4=0.05%, 10=0.36%, 20=0.70%, 50=60.79%, 100=8.88% 00:26:24.233 lat (msec) : 250=9.19%, 500=8.45%, 750=9.41%, 1000=2.18% 00:26:24.233 cpu : usr=0.16%, sys=1.44%, ctx=669, majf=0, minf=4097 00:26:24.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:24.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.233 issued rwts: total=4167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.233 job1: (groupid=0, jobs=1): err= 0: pid=1639567: Mon Dec 9 05:19:36 2024 00:26:24.233 read: IOPS=267, BW=67.0MiB/s (70.2MB/s)(681MiB/10175msec) 00:26:24.233 slat (usec): min=13, max=463269, avg=3414.13, stdev=18042.06 00:26:24.233 clat (usec): min=1636, max=1157.5k, avg=235178.74, stdev=243041.48 00:26:24.233 lat (usec): min=1680, max=1157.5k, avg=238592.87, stdev=246608.74 00:26:24.233 clat percentiles (msec): 00:26:24.233 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 70], 20.00th=[ 72], 00:26:24.233 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 78], 60.00th=[ 190], 00:26:24.233 | 70.00th=[ 259], 80.00th=[ 393], 90.00th=[ 684], 95.00th=[ 776], 00:26:24.233 | 99.00th=[ 869], 99.50th=[ 894], 99.90th=[ 961], 99.95th=[ 961], 00:26:24.233 | 99.99th=[ 1150] 00:26:24.233 bw ( 
KiB/s): min= 6144, max=219136, per=7.85%, avg=68124.95, stdev=70427.69, samples=20 00:26:24.233 iops : min= 24, max= 856, avg=266.10, stdev=275.11, samples=20 00:26:24.233 lat (msec) : 2=0.15%, 4=0.70%, 10=1.50%, 20=2.72%, 50=1.47% 00:26:24.233 lat (msec) : 100=47.30%, 250=15.16%, 500=12.81%, 750=12.66%, 1000=5.50% 00:26:24.233 lat (msec) : 2000=0.04% 00:26:24.233 cpu : usr=0.19%, sys=1.13%, ctx=759, majf=0, minf=4097 00:26:24.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:24.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.233 issued rwts: total=2725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.233 job2: (groupid=0, jobs=1): err= 0: pid=1639583: Mon Dec 9 05:19:36 2024 00:26:24.233 read: IOPS=619, BW=155MiB/s (162MB/s)(1553MiB/10033msec) 00:26:24.233 slat (usec): min=6, max=111848, avg=1429.86, stdev=6315.40 00:26:24.233 clat (usec): min=1595, max=712150, avg=101829.35, stdev=96534.93 00:26:24.233 lat (usec): min=1640, max=712177, avg=103259.21, stdev=97793.70 00:26:24.233 clat percentiles (msec): 00:26:24.233 | 1.00th=[ 4], 5.00th=[ 23], 10.00th=[ 26], 20.00th=[ 29], 00:26:24.233 | 30.00th=[ 38], 40.00th=[ 45], 50.00th=[ 68], 60.00th=[ 89], 00:26:24.233 | 70.00th=[ 126], 80.00th=[ 169], 90.00th=[ 224], 95.00th=[ 279], 00:26:24.233 | 99.00th=[ 456], 99.50th=[ 472], 99.90th=[ 676], 99.95th=[ 676], 00:26:24.233 | 99.99th=[ 709] 00:26:24.233 bw ( KiB/s): min=35328, max=462336, per=18.15%, avg=157426.50, stdev=122324.96, samples=20 00:26:24.233 iops : min= 138, max= 1806, avg=614.90, stdev=477.85, samples=20 00:26:24.233 lat (msec) : 2=0.02%, 4=1.16%, 10=1.61%, 20=0.60%, 50=40.82% 00:26:24.233 lat (msec) : 100=18.48%, 250=30.22%, 500=6.83%, 750=0.27% 00:26:24.233 cpu : usr=0.11%, sys=1.98%, ctx=1233, majf=0, minf=4097 00:26:24.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:24.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.233 issued rwts: total=6212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.233 job3: (groupid=0, jobs=1): err= 0: pid=1639587: Mon Dec 9 05:19:36 2024 00:26:24.233 read: IOPS=342, BW=85.7MiB/s (89.9MB/s)(861MiB/10037msec) 00:26:24.233 slat (usec): min=12, max=365168, avg=2457.58, stdev=10430.09 00:26:24.233 clat (msec): min=15, max=817, avg=183.90, stdev=122.03 00:26:24.233 lat (msec): min=15, max=966, avg=186.36, stdev=123.31 00:26:24.233 clat percentiles (msec): 00:26:24.233 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 83], 00:26:24.233 | 30.00th=[ 118], 40.00th=[ 153], 50.00th=[ 176], 60.00th=[ 192], 00:26:24.233 | 70.00th=[ 213], 80.00th=[ 243], 90.00th=[ 334], 95.00th=[ 401], 00:26:24.233 | 99.00th=[ 718], 99.50th=[ 776], 99.90th=[ 802], 99.95th=[ 818], 00:26:24.233 | 99.99th=[ 818] 00:26:24.233 bw ( KiB/s): min=12800, max=210944, per=9.97%, avg=86502.40, stdev=48440.46, samples=20 00:26:24.233 iops : min= 50, max= 824, avg=337.90, stdev=189.22, samples=20 00:26:24.233 lat (msec) : 20=0.55%, 50=10.92%, 100=14.21%, 250=55.64%, 500=16.47% 00:26:24.233 lat (msec) : 750=1.54%, 1000=0.67% 00:26:24.233 cpu : usr=0.13%, sys=1.08%, ctx=636, majf=0, minf=3534 00:26:24.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.5%, 32=0.9%, >=64=98.2% 00:26:24.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.233 issued rwts: total=3442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.233 job4: (groupid=0, jobs=1): err= 0: pid=1639594: Mon Dec 9 05:19:36 2024 00:26:24.233 read: IOPS=326, BW=81.6MiB/s (85.6MB/s)(831MiB/10178msec) 00:26:24.233 slat (usec): min=12, max=420758, avg=2212.91, stdev=14513.31 00:26:24.233 clat (msec): min=17, max=959, avg=193.45, stdev=212.17 00:26:24.233 lat (msec): min=18, max=1231, avg=195.67, stdev=214.69 00:26:24.233 clat percentiles (msec): 00:26:24.233 | 1.00th=[ 31], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 65], 00:26:24.233 | 30.00th=[ 91], 40.00th=[ 101], 50.00th=[ 108], 60.00th=[ 117], 00:26:24.233 | 70.00th=[ 142], 80.00th=[ 275], 90.00th=[ 558], 95.00th=[ 726], 00:26:24.233 | 99.00th=[ 936], 99.50th=[ 936], 99.90th=[ 961], 99.95th=[ 961], 00:26:24.233 | 99.99th=[ 961] 00:26:24.233 bw ( KiB/s): min= 7168, max=201216, per=9.62%, avg=83430.40, stdev=66415.67, samples=20 00:26:24.233 iops : min= 28, max= 786, avg=325.90, stdev=259.44, samples=20 00:26:24.233 lat (msec) : 20=0.12%, 50=11.80%, 100=28.20%, 250=38.61%, 500=10.02% 00:26:24.233 lat (msec) : 750=7.43%, 1000=3.82% 00:26:24.233 cpu : usr=0.08%, sys=1.11%, ctx=638, majf=0, minf=4097 00:26:24.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:24.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.233 issued rwts: total=3323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.233 job5: (groupid=0, jobs=1): err= 0: pid=1639616: Mon Dec 9 05:19:36 2024 00:26:24.233 read: IOPS=116, BW=29.1MiB/s (30.5MB/s)(295MiB/10163msec) 00:26:24.233 slat (usec): min=13, max=416408, avg=7412.65, stdev=26569.92 00:26:24.233 clat (msec): min=147, max=978, avg=542.85, stdev=187.21 00:26:24.233 lat (msec): min=190, max=1006, avg=550.26, stdev=189.83 00:26:24.233 clat percentiles (msec): 00:26:24.233 | 1.00th=[ 215], 5.00th=[ 245], 10.00th=[ 264], 20.00th=[ 363], 00:26:24.233 | 30.00th=[ 401], 40.00th=[ 481], 50.00th=[ 567], 60.00th=[ 625], 00:26:24.233 | 70.00th=[ 667], 80.00th=[ 709], 90.00th=[ 776], 95.00th=[ 835], 00:26:24.233 | 99.00th=[ 911], 99.50th=[ 953], 99.90th=[ 969], 99.95th=[ 978], 00:26:24.233 | 99.99th=[ 978] 00:26:24.233 bw ( KiB/s): min= 9216, max=69632, per=3.30%, avg=28595.20, stdev=12887.84, samples=20 00:26:24.233 iops : min= 36, max= 272, avg=111.70, stdev=50.34, samples=20 00:26:24.233 lat (msec) : 250=7.20%, 500=34.46%, 750=43.95%, 1000=14.39% 00:26:24.233 cpu : usr=0.04%, sys=0.43%, ctx=202, majf=0, minf=4097 00:26:24.233 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7% 00:26:24.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.233 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.233 issued rwts: total=1181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.233 job6: (groupid=0, jobs=1): err= 0: pid=1639626: Mon Dec 9 05:19:36 2024 00:26:24.233 read: IOPS=404, BW=101MiB/s (106MB/s)(1031MiB/10187msec) 00:26:24.233 slat (usec): min=12, max=368413, avg=2422.31, 
stdev=11923.43 00:26:24.233 clat (msec): min=18, max=930, avg=155.42, stdev=140.94 00:26:24.233 lat (msec): min=20, max=930, avg=157.84, stdev=143.01 00:26:24.233 clat percentiles (msec): 00:26:24.233 | 1.00th=[ 58], 5.00th=[ 69], 10.00th=[ 74], 20.00th=[ 84], 00:26:24.233 | 30.00th=[ 92], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 114], 00:26:24.233 | 70.00th=[ 123], 80.00th=[ 169], 90.00th=[ 342], 95.00th=[ 550], 00:26:24.233 | 99.00th=[ 709], 99.50th=[ 751], 99.90th=[ 810], 99.95th=[ 827], 00:26:24.233 | 99.99th=[ 927] 00:26:24.233 bw ( KiB/s): min=18432, max=214528, per=11.98%, avg=103910.40, stdev=65924.21, samples=20 00:26:24.233 iops : min= 72, max= 838, avg=405.90, stdev=257.52, samples=20 00:26:24.233 lat (msec) : 20=0.02%, 50=0.27%, 100=42.44%, 250=44.36%, 500=6.89% 00:26:24.233 lat (msec) : 750=5.43%, 1000=0.58% 00:26:24.233 cpu : usr=0.09%, sys=1.23%, ctx=737, majf=0, minf=4097 00:26:24.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:24.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.233 issued rwts: total=4123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.233 job7: (groupid=0, jobs=1): err= 0: pid=1639635: Mon Dec 9 05:19:36 2024 00:26:24.233 read: IOPS=164, BW=41.0MiB/s (43.0MB/s)(418MiB/10176msec) 00:26:24.233 slat (usec): min=12, max=301589, avg=5188.40, stdev=18999.82 00:26:24.233 clat (msec): min=22, max=865, avg=384.08, stdev=216.93 00:26:24.233 lat (msec): min=22, max=1011, avg=389.27, stdev=220.02 00:26:24.233 clat percentiles (msec): 00:26:24.234 | 1.00th=[ 51], 5.00th=[ 120], 10.00th=[ 161], 20.00th=[ 209], 00:26:24.234 | 30.00th=[ 234], 40.00th=[ 262], 50.00th=[ 292], 60.00th=[ 384], 00:26:24.234 | 70.00th=[ 468], 80.00th=[ 642], 90.00th=[ 726], 95.00th=[ 785], 00:26:24.234 | 99.00th=[ 835], 99.50th=[ 835], 99.90th=[ 860], 99.95th=[ 869], 00:26:24.234 | 99.99th=[ 869] 00:26:24.234 bw ( KiB/s): min=19456, max=80896, per=4.74%, avg=41121.20, stdev=20360.77, samples=20 00:26:24.234 iops : min= 76, max= 316, avg=160.60, stdev=79.48, samples=20 00:26:24.234 lat (msec) : 50=0.78%, 100=2.75%, 250=32.40%, 500=34.55%, 750=21.56% 00:26:24.234 lat (msec) : 1000=7.96% 00:26:24.234 cpu : usr=0.06%, sys=0.59%, ctx=330, majf=0, minf=4097 00:26:24.234 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:24.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.234 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.234 issued rwts: total=1670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.234 job8: (groupid=0, jobs=1): err= 0: pid=1639661: Mon Dec 9 05:19:36 2024 00:26:24.234 read: IOPS=411, BW=103MiB/s (108MB/s)(1030MiB/10016msec) 00:26:24.234 slat (usec): min=8, max=283254, avg=2154.95, stdev=9004.86 00:26:24.234 clat (msec): min=3, max=514, avg=153.21, stdev=95.99 00:26:24.234 lat (msec): min=3, max=597, avg=155.36, stdev=97.14 00:26:24.234 clat percentiles (msec): 00:26:24.234 | 1.00th=[ 14], 5.00th=[ 35], 10.00th=[ 56], 20.00th=[ 74], 00:26:24.234 | 30.00th=[ 92], 40.00th=[ 110], 50.00th=[ 127], 60.00th=[ 169], 00:26:24.234 | 70.00th=[ 194], 80.00th=[ 211], 90.00th=[ 271], 95.00th=[ 351], 00:26:24.234 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 510], 99.95th=[ 510], 00:26:24.234 | 99.99th=[ 514] 
00:26:24.234 bw ( KiB/s): min=40960, max=215040, per=11.97%, avg=103884.80, stdev=49294.36, samples=20 00:26:24.234 iops : min= 160, max= 840, avg=405.80, stdev=192.56, samples=20 00:26:24.234 lat (msec) : 4=0.05%, 10=0.63%, 20=0.83%, 50=7.33%, 100=25.58% 00:26:24.234 lat (msec) : 250=54.11%, 500=10.65%, 750=0.83% 00:26:24.234 cpu : usr=0.09%, sys=1.37%, ctx=876, majf=0, minf=4097 00:26:24.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:24.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.234 issued rwts: total=4121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.234 job9: (groupid=0, jobs=1): err= 0: pid=1639674: Mon Dec 9 05:19:36 2024 00:26:24.234 read: IOPS=170, BW=42.6MiB/s (44.6MB/s)(433MiB/10176msec) 00:26:24.234 slat (usec): min=12, max=198046, avg=3926.13, stdev=16742.39 00:26:24.234 clat (msec): min=14, max=861, avg=371.44, stdev=253.45 00:26:24.234 lat (msec): min=16, max=862, avg=375.36, stdev=256.52 00:26:24.234 clat percentiles (msec): 00:26:24.234 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 111], 00:26:24.234 | 30.00th=[ 171], 40.00th=[ 249], 50.00th=[ 334], 60.00th=[ 401], 00:26:24.234 | 70.00th=[ 600], 80.00th=[ 659], 90.00th=[ 718], 95.00th=[ 760], 00:26:24.234 | 99.00th=[ 827], 99.50th=[ 835], 99.90th=[ 852], 99.95th=[ 860], 00:26:24.234 | 99.99th=[ 860] 00:26:24.234 bw ( KiB/s): min=14848, max=145408, per=4.92%, avg=42700.80, stdev=32439.79, samples=20 00:26:24.234 iops : min= 58, max= 568, avg=166.80, stdev=126.72, samples=20 00:26:24.234 lat (msec) : 20=0.40%, 50=13.11%, 100=4.45%, 250=22.23%, 500=22.86% 00:26:24.234 lat (msec) : 750=31.24%, 1000=5.72% 00:26:24.234 cpu : usr=0.10%, sys=0.61%, ctx=391, majf=0, minf=4097 00:26:24.234 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:24.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.234 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.234 issued rwts: total=1732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.234 job10: (groupid=0, jobs=1): err= 0: pid=1639683: Mon Dec 9 05:19:36 2024 00:26:24.234 read: IOPS=179, BW=44.9MiB/s (47.1MB/s)(456MiB/10163msec) 00:26:24.234 slat (usec): min=11, max=521384, avg=4738.97, stdev=23088.32 00:26:24.234 clat (msec): min=15, max=1156, avg=351.20, stdev=260.66 00:26:24.234 lat (msec): min=15, max=1156, avg=355.94, stdev=263.52 00:26:24.234 clat percentiles (msec): 00:26:24.234 | 1.00th=[ 25], 5.00th=[ 44], 10.00th=[ 59], 20.00th=[ 96], 00:26:24.234 | 30.00th=[ 205], 40.00th=[ 243], 50.00th=[ 279], 60.00th=[ 338], 00:26:24.234 | 70.00th=[ 418], 80.00th=[ 634], 90.00th=[ 760], 95.00th=[ 793], 00:26:24.234 | 99.00th=[ 1116], 99.50th=[ 1150], 99.90th=[ 1150], 99.95th=[ 1150], 00:26:24.234 | 99.99th=[ 1150] 00:26:24.234 bw ( KiB/s): min= 9728, max=167936, per=5.20%, avg=45081.60, stdev=34509.30, samples=20 00:26:24.234 iops : min= 38, max= 656, avg=176.10, stdev=134.80, samples=20 00:26:24.234 lat (msec) : 20=0.55%, 50=6.41%, 100=13.42%, 250=22.63%, 500=32.77% 00:26:24.234 lat (msec) : 750=13.81%, 1000=8.00%, 2000=2.41% 00:26:24.234 cpu : usr=0.05%, sys=0.64%, ctx=364, majf=0, minf=4097 00:26:24.234 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:26:24.234 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.234 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.234 issued rwts: total=1825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.234 00:26:24.234 Run status group 0 (all jobs): 00:26:24.234 READ: bw=847MiB/s (888MB/s), 29.1MiB/s-155MiB/s (30.5MB/s-162MB/s), io=8630MiB (9049MB), run=10016-10187msec 00:26:24.234 00:26:24.234 Disk stats (read/write): 00:26:24.234 nvme0n1: ios=8228/0, merge=0/0, ticks=1247956/0, in_queue=1247956, util=96.57% 00:26:24.234 nvme10n1: ios=5380/0, merge=0/0, ticks=1237238/0, in_queue=1237238, util=96.86% 00:26:24.234 nvme1n1: ios=11916/0, merge=0/0, ticks=1223884/0, in_queue=1223884, util=97.06% 00:26:24.234 nvme2n1: ios=6518/0, merge=0/0, ticks=1227812/0, in_queue=1227812, util=97.40% 00:26:24.234 nvme3n1: ios=6551/0, merge=0/0, ticks=1236243/0, in_queue=1236243, util=97.56% 00:26:24.234 nvme4n1: ios=2259/0, merge=0/0, ticks=1219705/0, in_queue=1219705, util=97.88% 00:26:24.234 nvme5n1: ios=8137/0, merge=0/0, ticks=1231358/0, in_queue=1231358, util=98.18% 00:26:24.234 nvme6n1: ios=3249/0, merge=0/0, ticks=1218520/0, in_queue=1218520, util=98.35% 00:26:24.234 nvme7n1: ios=7687/0, merge=0/0, ticks=1227357/0, in_queue=1227357, util=98.78% 00:26:24.234 nvme8n1: ios=3387/0, merge=0/0, ticks=1244775/0, in_queue=1244775, util=99.05% 00:26:24.234 nvme9n1: ios=3615/0, merge=0/0, ticks=1252082/0, in_queue=1252082, util=99.19% 00:26:24.234 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:24.234 [global] 00:26:24.234 thread=1 00:26:24.234 invalidate=1 00:26:24.234 rw=randwrite 00:26:24.234 time_based=1 00:26:24.234 runtime=10 00:26:24.234 ioengine=libaio 00:26:24.234 direct=1 00:26:24.234 bs=262144 00:26:24.234 iodepth=64 00:26:24.234 norandommap=1 00:26:24.234 numjobs=1 00:26:24.234 00:26:24.234 [job0] 00:26:24.234 filename=/dev/nvme0n1 00:26:24.234 [job1] 00:26:24.234 filename=/dev/nvme10n1 00:26:24.234 [job2] 00:26:24.234 filename=/dev/nvme1n1 00:26:24.234 [job3] 00:26:24.234 filename=/dev/nvme2n1 00:26:24.234 [job4] 00:26:24.234 filename=/dev/nvme3n1 00:26:24.234 [job5] 00:26:24.234 filename=/dev/nvme4n1 00:26:24.234 [job6] 00:26:24.234 filename=/dev/nvme5n1 00:26:24.234 [job7] 00:26:24.234 filename=/dev/nvme6n1 00:26:24.234 [job8] 00:26:24.234 filename=/dev/nvme7n1 00:26:24.234 [job9] 00:26:24.234 filename=/dev/nvme8n1 00:26:24.234 [job10] 00:26:24.234 filename=/dev/nvme9n1 00:26:24.234 Could not set queue depth (nvme0n1) 00:26:24.234 Could not set queue depth (nvme10n1) 00:26:24.234 Could not set queue depth (nvme1n1) 00:26:24.234 Could not set queue depth (nvme2n1) 00:26:24.234 Could not set queue depth (nvme3n1) 00:26:24.234 Could not set queue depth (nvme4n1) 00:26:24.234 Could not set queue depth (nvme5n1) 00:26:24.234 Could not set queue depth (nvme6n1) 00:26:24.234 Could not set queue depth (nvme7n1) 00:26:24.234 Could not set queue depth (nvme8n1) 00:26:24.234 Could not set queue depth (nvme9n1) 00:26:24.234 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
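
Each per-job block above follows standard fio anatomy: slat is submission latency, clat completion latency, and lat their sum; the clat percentile table exposes the tail (job0's 99.90th percentile near 960 ms against a 154 ms mean points at deep queueing on the slower subsystems); bw and iops report per-sample aggregates with stdev over samples=20; and the IO depths row shows that roughly 98% of I/Os ran at depth >=64. The "Run status group 0" line sums the group (847 MiB/s aggregate read across the eleven namespaces over the ~10 s windows), and "Disk stats" adds the kernel block-layer counters (ios, merges, ticks, in_queue, util) per device. The second fio pass now starting is identical except for -t randwrite, hence rw=randwrite with norandommap=1 in its echoed job file. To pull one metric out of a saved copy of this output, something like the following works (illustrative; fio.log is a hypothetical capture):

    # Print each job's average bandwidth in KiB/s from a captured log.
    grep 'bw (' fio.log | sed -n 's/.*avg=\([0-9.]*\),.*/\1/p'
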
256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.234 fio-3.35 00:26:24.234 Starting 11 threads 00:26:34.240 00:26:34.240 job0: (groupid=0, jobs=1): err= 0: pid=1640817: Mon Dec 9 05:19:47 2024 00:26:34.240 write: IOPS=561, BW=140MiB/s (147MB/s)(1420MiB/10115msec); 0 zone resets 00:26:34.240 slat (usec): min=23, max=25885, avg=1718.28, stdev=3512.27 00:26:34.240 clat (msec): min=20, max=272, avg=112.19, stdev=58.18 00:26:34.240 lat (msec): min=20, max=272, avg=113.91, stdev=58.95 00:26:34.240 clat percentiles (msec): 00:26:34.240 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:26:34.240 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 99], 60.00th=[ 144], 00:26:34.240 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 205], 00:26:34.240 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 271], 99.95th=[ 271], 00:26:34.240 | 99.99th=[ 271] 00:26:34.240 bw ( KiB/s): min=79872, max=296960, per=12.30%, avg=143829.45, stdev=75579.49, samples=20 00:26:34.240 iops : min= 312, max= 1160, avg=561.80, stdev=295.26, samples=20 00:26:34.240 lat (msec) : 50=1.85%, 100=48.67%, 250=48.71%, 500=0.77% 00:26:34.240 cpu : usr=1.35%, sys=1.77%, ctx=1425, majf=0, minf=1 00:26:34.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:34.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.240 issued rwts: total=0,5681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.240 job1: (groupid=0, jobs=1): err= 0: pid=1640836: Mon Dec 9 05:19:47 2024 00:26:34.240 write: IOPS=354, BW=88.7MiB/s (93.0MB/s)(899MiB/10134msec); 0 zone resets 00:26:34.240 slat (usec): min=24, max=70351, avg=2359.84, stdev=5257.92 00:26:34.240 clat (msec): min=7, max=370, avg=177.89, stdev=73.18 00:26:34.240 lat (msec): min=8, max=372, avg=180.25, stdev=74.10 00:26:34.240 clat percentiles (msec): 00:26:34.240 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 59], 20.00th=[ 116], 00:26:34.240 | 30.00th=[ 174], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 205], 00:26:34.240 | 70.00th=[ 213], 80.00th=[ 232], 90.00th=[ 253], 95.00th=[ 279], 00:26:34.240 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:26:34.240 | 99.99th=[ 372] 00:26:34.240 bw ( KiB/s): min=61440, max=154624, per=7.74%, avg=90444.80, stdev=25323.80, samples=20 00:26:34.240 iops : min= 240, max= 604, avg=353.30, stdev=98.92, samples=20 00:26:34.240 lat 
(msec) : 10=0.22%, 20=2.75%, 50=5.81%, 100=7.81%, 250=72.75% 00:26:34.240 lat (msec) : 500=10.65% 00:26:34.240 cpu : usr=0.95%, sys=1.05%, ctx=1525, majf=0, minf=1 00:26:34.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:34.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.240 issued rwts: total=0,3596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.240 job2: (groupid=0, jobs=1): err= 0: pid=1640848: Mon Dec 9 05:19:47 2024 00:26:34.240 write: IOPS=356, BW=89.2MiB/s (93.6MB/s)(904MiB/10134msec); 0 zone resets 00:26:34.240 slat (usec): min=22, max=46693, avg=2648.83, stdev=5047.93 00:26:34.240 clat (msec): min=19, max=310, avg=176.61, stdev=53.15 00:26:34.240 lat (msec): min=19, max=310, avg=179.26, stdev=53.77 00:26:34.240 clat percentiles (msec): 00:26:34.240 | 1.00th=[ 65], 5.00th=[ 70], 10.00th=[ 90], 20.00th=[ 130], 00:26:34.240 | 30.00th=[ 163], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:26:34.240 | 70.00th=[ 207], 80.00th=[ 218], 90.00th=[ 234], 95.00th=[ 255], 00:26:34.240 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 300], 99.95th=[ 313], 00:26:34.240 | 99.99th=[ 313] 00:26:34.240 bw ( KiB/s): min=59392, max=187904, per=7.78%, avg=90991.05, stdev=28659.59, samples=20 00:26:34.240 iops : min= 232, max= 734, avg=355.40, stdev=111.96, samples=20 00:26:34.240 lat (msec) : 20=0.11%, 50=0.33%, 100=11.81%, 250=82.25%, 500=5.50% 00:26:34.240 cpu : usr=0.73%, sys=0.96%, ctx=1017, majf=0, minf=1 00:26:34.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:34.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.240 issued rwts: total=0,3617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.240 job3: (groupid=0, jobs=1): err= 0: pid=1640851: Mon Dec 9 05:19:47 2024 00:26:34.240 write: IOPS=378, BW=94.5MiB/s (99.1MB/s)(957MiB/10117msec); 0 zone resets 00:26:34.240 slat (usec): min=22, max=141316, avg=2302.54, stdev=5395.52 00:26:34.240 clat (msec): min=5, max=489, avg=166.85, stdev=56.30 00:26:34.240 lat (msec): min=5, max=489, avg=169.15, stdev=56.84 00:26:34.240 clat percentiles (msec): 00:26:34.240 | 1.00th=[ 52], 5.00th=[ 82], 10.00th=[ 102], 20.00th=[ 130], 00:26:34.240 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:26:34.240 | 70.00th=[ 176], 80.00th=[ 192], 90.00th=[ 234], 95.00th=[ 266], 00:26:34.240 | 99.00th=[ 355], 99.50th=[ 426], 99.90th=[ 485], 99.95th=[ 489], 00:26:34.240 | 99.99th=[ 489] 00:26:34.240 bw ( KiB/s): min=70144, max=133632, per=8.24%, avg=96332.80, stdev=19376.61, samples=20 00:26:34.240 iops : min= 274, max= 522, avg=376.30, stdev=75.69, samples=20 00:26:34.240 lat (msec) : 10=0.03%, 20=0.05%, 50=0.84%, 100=8.78%, 250=83.48% 00:26:34.240 lat (msec) : 500=6.82% 00:26:34.240 cpu : usr=1.00%, sys=1.20%, ctx=1389, majf=0, minf=1 00:26:34.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:34.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.240 issued rwts: total=0,3826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.240 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:34.240 job4: (groupid=0, jobs=1): err= 0: pid=1640856: Mon Dec 9 05:19:47 2024 00:26:34.240 write: IOPS=388, BW=97.1MiB/s (102MB/s)(983MiB/10117msec); 0 zone resets 00:26:34.240 slat (usec): min=23, max=91645, avg=2083.49, stdev=5271.77 00:26:34.240 clat (msec): min=5, max=425, avg=162.58, stdev=66.97 00:26:34.240 lat (msec): min=5, max=430, avg=164.66, stdev=67.75 00:26:34.240 clat percentiles (msec): 00:26:34.240 | 1.00th=[ 17], 5.00th=[ 35], 10.00th=[ 73], 20.00th=[ 127], 00:26:34.240 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:26:34.240 | 70.00th=[ 176], 80.00th=[ 190], 90.00th=[ 232], 95.00th=[ 279], 00:26:34.240 | 99.00th=[ 372], 99.50th=[ 414], 99.90th=[ 426], 99.95th=[ 426], 00:26:34.240 | 99.99th=[ 426] 00:26:34.240 bw ( KiB/s): min=52736, max=182784, per=8.47%, avg=98995.20, stdev=26327.10, samples=20 00:26:34.240 iops : min= 206, max= 714, avg=386.70, stdev=102.84, samples=20 00:26:34.240 lat (msec) : 10=0.36%, 20=2.04%, 50=4.30%, 100=8.45%, 250=76.95% 00:26:34.240 lat (msec) : 500=7.91% 00:26:34.240 cpu : usr=1.02%, sys=1.37%, ctx=1665, majf=0, minf=1 00:26:34.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:34.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.241 issued rwts: total=0,3930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.241 job5: (groupid=0, jobs=1): err= 0: pid=1640883: Mon Dec 9 05:19:47 2024 00:26:34.241 write: IOPS=296, BW=74.2MiB/s (77.8MB/s)(753MiB/10137msec); 0 zone resets 00:26:34.241 slat (usec): min=24, max=49858, avg=3231.39, stdev=6036.10 00:26:34.241 clat (msec): min=21, max=377, avg=212.22, stdev=55.57 00:26:34.241 lat (msec): min=21, max=377, avg=215.45, stdev=56.12 00:26:34.241 clat percentiles (msec): 00:26:34.241 | 1.00th=[ 71], 5.00th=[ 120], 10.00th=[ 167], 20.00th=[ 182], 00:26:34.241 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 205], 60.00th=[ 211], 00:26:34.241 | 70.00th=[ 228], 80.00th=[ 245], 90.00th=[ 288], 95.00th=[ 326], 00:26:34.241 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 380], 00:26:34.241 | 99.99th=[ 380] 00:26:34.241 bw ( KiB/s): min=43008, max=102400, per=6.45%, avg=75443.20, stdev=14646.64, samples=20 00:26:34.241 iops : min= 168, max= 400, avg=294.70, stdev=57.21, samples=20 00:26:34.241 lat (msec) : 50=0.50%, 100=2.13%, 250=78.50%, 500=18.87% 00:26:34.241 cpu : usr=0.70%, sys=0.76%, ctx=820, majf=0, minf=1 00:26:34.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:34.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.241 issued rwts: total=0,3010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.241 job6: (groupid=0, jobs=1): err= 0: pid=1640894: Mon Dec 9 05:19:47 2024 00:26:34.241 write: IOPS=353, BW=88.3MiB/s (92.6MB/s)(895MiB/10136msec); 0 zone resets 00:26:34.241 slat (usec): min=24, max=36832, avg=2741.40, stdev=5058.40 00:26:34.241 clat (msec): min=16, max=307, avg=178.40, stdev=52.45 00:26:34.241 lat (msec): min=16, max=307, avg=181.14, stdev=52.96 00:26:34.241 clat percentiles (msec): 00:26:34.241 | 1.00th=[ 66], 5.00th=[ 70], 10.00th=[ 91], 20.00th=[ 138], 00:26:34.241 | 30.00th=[ 165], 40.00th=[ 182], 
50.00th=[ 188], 60.00th=[ 197], 00:26:34.241 | 70.00th=[ 207], 80.00th=[ 215], 90.00th=[ 234], 95.00th=[ 259], 00:26:34.241 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 309], 00:26:34.241 | 99.99th=[ 309] 00:26:34.241 bw ( KiB/s): min=60416, max=188416, per=7.70%, avg=90009.60, stdev=28238.82, samples=20 00:26:34.241 iops : min= 236, max= 736, avg=351.60, stdev=110.31, samples=20 00:26:34.241 lat (msec) : 20=0.11%, 50=0.34%, 100=11.34%, 250=82.63%, 500=5.59% 00:26:34.241 cpu : usr=0.90%, sys=1.07%, ctx=916, majf=0, minf=1 00:26:34.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:34.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.241 issued rwts: total=0,3580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.241 job7: (groupid=0, jobs=1): err= 0: pid=1640903: Mon Dec 9 05:19:47 2024 00:26:34.241 write: IOPS=396, BW=99.2MiB/s (104MB/s)(1004MiB/10122msec); 0 zone resets 00:26:34.241 slat (usec): min=24, max=37176, avg=2259.49, stdev=4384.59 00:26:34.241 clat (msec): min=22, max=312, avg=159.03, stdev=36.19 00:26:34.241 lat (msec): min=22, max=313, avg=161.29, stdev=36.57 00:26:34.241 clat percentiles (msec): 00:26:34.241 | 1.00th=[ 59], 5.00th=[ 110], 10.00th=[ 125], 20.00th=[ 130], 00:26:34.241 | 30.00th=[ 136], 40.00th=[ 142], 50.00th=[ 153], 60.00th=[ 171], 00:26:34.241 | 70.00th=[ 190], 80.00th=[ 197], 90.00th=[ 201], 95.00th=[ 207], 00:26:34.241 | 99.00th=[ 218], 99.50th=[ 253], 99.90th=[ 300], 99.95th=[ 300], 00:26:34.241 | 99.99th=[ 313] 00:26:34.241 bw ( KiB/s): min=79872, max=147456, per=8.65%, avg=101171.20, stdev=20260.14, samples=20 00:26:34.241 iops : min= 312, max= 576, avg=395.20, stdev=79.14, samples=20 00:26:34.241 lat (msec) : 50=0.75%, 100=3.39%, 250=95.32%, 500=0.55% 00:26:34.241 cpu : usr=0.85%, sys=1.14%, ctx=1318, majf=0, minf=1 00:26:34.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:34.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.241 issued rwts: total=0,4015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.241 job8: (groupid=0, jobs=1): err= 0: pid=1640931: Mon Dec 9 05:19:47 2024 00:26:34.241 write: IOPS=371, BW=93.0MiB/s (97.5MB/s)(941MiB/10123msec); 0 zone resets 00:26:34.241 slat (usec): min=25, max=81145, avg=2611.35, stdev=5406.22 00:26:34.241 clat (msec): min=5, max=376, avg=169.36, stdev=73.24 00:26:34.241 lat (msec): min=5, max=376, avg=171.97, stdev=74.17 00:26:34.241 clat percentiles (msec): 00:26:34.241 | 1.00th=[ 64], 5.00th=[ 66], 10.00th=[ 68], 20.00th=[ 75], 00:26:34.241 | 30.00th=[ 125], 40.00th=[ 159], 50.00th=[ 186], 60.00th=[ 194], 00:26:34.241 | 70.00th=[ 199], 80.00th=[ 205], 90.00th=[ 255], 95.00th=[ 313], 00:26:34.241 | 99.00th=[ 368], 99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:26:34.241 | 99.99th=[ 376] 00:26:34.241 bw ( KiB/s): min=45056, max=224768, per=8.11%, avg=94781.35, stdev=46942.36, samples=20 00:26:34.241 iops : min= 176, max= 878, avg=370.20, stdev=183.36, samples=20 00:26:34.241 lat (msec) : 10=0.03%, 50=0.03%, 100=22.05%, 250=66.99%, 500=10.92% 00:26:34.241 cpu : usr=0.87%, sys=1.21%, ctx=986, majf=0, minf=1 00:26:34.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.4%, 32=0.8%, >=64=98.3% 00:26:34.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.241 issued rwts: total=0,3765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.241 job9: (groupid=0, jobs=1): err= 0: pid=1640943: Mon Dec 9 05:19:47 2024 00:26:34.241 write: IOPS=684, BW=171MiB/s (180MB/s)(1724MiB/10065msec); 0 zone resets 00:26:34.241 slat (usec): min=21, max=32263, avg=1292.68, stdev=3290.57 00:26:34.241 clat (msec): min=2, max=381, avg=92.12, stdev=70.06 00:26:34.241 lat (msec): min=2, max=381, avg=93.41, stdev=71.07 00:26:34.241 clat percentiles (msec): 00:26:34.241 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 50], 20.00th=[ 55], 00:26:34.241 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 71], 60.00th=[ 74], 00:26:34.241 | 70.00th=[ 79], 80.00th=[ 114], 90.00th=[ 182], 95.00th=[ 262], 00:26:34.241 | 99.00th=[ 359], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 380], 00:26:34.241 | 99.99th=[ 380] 00:26:34.241 bw ( KiB/s): min=43008, max=294400, per=14.96%, avg=174884.45, stdev=92779.75, samples=20 00:26:34.241 iops : min= 168, max= 1150, avg=683.10, stdev=362.45, samples=20 00:26:34.241 lat (msec) : 4=0.09%, 10=1.15%, 20=1.65%, 50=7.86%, 100=65.62% 00:26:34.241 lat (msec) : 250=17.87%, 500=5.76% 00:26:34.241 cpu : usr=1.39%, sys=2.21%, ctx=2551, majf=0, minf=1 00:26:34.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:34.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.241 issued rwts: total=0,6894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.241 job10: (groupid=0, jobs=1): err= 0: pid=1640953: Mon Dec 9 05:19:47 2024 00:26:34.241 write: IOPS=433, BW=108MiB/s (114MB/s)(1097MiB/10124msec); 0 zone resets 00:26:34.241 slat (usec): min=16, max=82748, avg=2259.56, stdev=4363.14 00:26:34.241 clat (msec): min=2, max=308, avg=145.38, stdev=47.47 00:26:34.241 lat (msec): min=6, max=308, avg=147.64, stdev=48.06 00:26:34.241 clat percentiles (msec): 00:26:34.241 | 1.00th=[ 26], 5.00th=[ 54], 10.00th=[ 73], 20.00th=[ 123], 00:26:34.241 | 30.00th=[ 129], 40.00th=[ 136], 50.00th=[ 142], 60.00th=[ 155], 00:26:34.241 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 201], 95.00th=[ 205], 00:26:34.241 | 99.00th=[ 218], 99.50th=[ 249], 99.90th=[ 296], 99.95th=[ 296], 00:26:34.241 | 99.99th=[ 309] 00:26:34.241 bw ( KiB/s): min=78336, max=256000, per=9.46%, avg=110668.80, stdev=40553.53, samples=20 00:26:34.241 iops : min= 306, max= 1000, avg=432.30, stdev=158.41, samples=20 00:26:34.241 lat (msec) : 4=0.02%, 10=0.27%, 20=0.48%, 50=1.41%, 100=13.75% 00:26:34.241 lat (msec) : 250=83.65%, 500=0.41% 00:26:34.241 cpu : usr=1.09%, sys=1.35%, ctx=1094, majf=0, minf=2 00:26:34.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:34.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.241 issued rwts: total=0,4386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.241 00:26:34.241 Run status group 0 (all jobs): 00:26:34.241 WRITE: bw=1142MiB/s (1197MB/s), 74.2MiB/s-171MiB/s (77.8MB/s-180MB/s), io=11.3GiB (12.1GB), 
run=10065-10137msec 00:26:34.241 00:26:34.241 Disk stats (read/write): 00:26:34.241 nvme0n1: ios=49/11334, merge=0/0, ticks=247/1228665, in_queue=1228912, util=98.46% 00:26:34.241 nvme10n1: ios=46/7148, merge=0/0, ticks=1412/1231784, in_queue=1233196, util=99.91% 00:26:34.241 nvme1n1: ios=20/7188, merge=0/0, ticks=63/1228953, in_queue=1229016, util=97.17% 00:26:34.241 nvme2n1: ios=46/7621, merge=0/0, ticks=1392/1230713, in_queue=1232105, util=99.90% 00:26:34.241 nvme3n1: ios=49/7828, merge=0/0, ticks=2868/1228410, in_queue=1231278, util=99.96% 00:26:34.241 nvme4n1: ios=0/5975, merge=0/0, ticks=0/1228513, in_queue=1228513, util=97.83% 00:26:34.241 nvme5n1: ios=0/7112, merge=0/0, ticks=0/1226649, in_queue=1226649, util=98.01% 00:26:34.241 nvme6n1: ios=0/7993, merge=0/0, ticks=0/1231624, in_queue=1231624, util=98.17% 00:26:34.241 nvme7n1: ios=42/7492, merge=0/0, ticks=3447/1225264, in_queue=1228711, util=99.95% 00:26:34.241 nvme8n1: ios=0/13405, merge=0/0, ticks=0/1205053, in_queue=1205053, util=98.86% 00:26:34.241 nvme9n1: ios=40/8731, merge=0/0, ticks=1216/1219241, in_queue=1220457, util=99.97% 00:26:34.241 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:34.241 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:34.241 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.241 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:34.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.501 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:35.071 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
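
Teardown mirrors setup: multiconnection.sh@37-40 walks the same subsystem range, disconnecting the initiator-side controller, waiting for the serial to vanish from lsblk, then deleting the subsystem on the target. A condensed sketch of the pattern visible in the trace (the poll body abbreviates autotest_common.sh@1223-1235, which checks both the plain and the -l lsblk listings; the retry cap here is an assumption):

    waitforserial_disconnect() {
        # Inverse of waitforserial: wait until no device carries the serial.
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1
            sleep 2
        done
        return 0
    }

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        waitforserial_disconnect "SPDK$i"
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done
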
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.071 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:35.331 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:35.331 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:35.331 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:35.331 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:35.591 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:35.591 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:35.591 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:35.591 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:35.591 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:35.591 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.591 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.592 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.592 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.592 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:35.851 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:35.851 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:35.851 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:35.851 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:35.851 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:35.851 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:35.851 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:35.851 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:35.852 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:35.852 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.852 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.852 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.852 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.852 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:36.422 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:36.422 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:36.682 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
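
The rpc_cmd wrapper used at @40 is the suite's shorthand for SPDK's JSON-RPC client, so each deletion should be equivalent to a direct call such as the following (assuming the target's default local RPC socket):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
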
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:36.682 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:37.253 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:37.253 05:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:37.512 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:37.512 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:37.772 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:37.772 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:38.032 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:38.032 05:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:38.293 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:38.293 05:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:38.293 rmmod nvme_tcp 00:26:38.293 rmmod nvme_fabrics 00:26:38.293 rmmod nvme_keyring 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1630972 ']' 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 1630972 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1630972 ']' 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1630972 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.293 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1630972 00:26:38.554 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:38.554 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:38.554 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1630972' 00:26:38.554 killing process with pid 1630972 00:26:38.554 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1630972 00:26:38.554 05:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1630972 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.938 05:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.484 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:42.484 00:26:42.484 real 1m21.418s 00:26:42.484 user 5m4.823s 00:26:42.484 sys 0m17.207s 00:26:42.484 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.484 05:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.484 ************************************ 00:26:42.484 END TEST nvmf_multiconnection 00:26:42.484 ************************************ 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:42.484 ************************************ 00:26:42.484 START TEST nvmf_initiator_timeout 00:26:42.484 ************************************ 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:42.484 * Looking for test storage... 
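The multiconnection teardown traced above repeats one fixed pattern per subsystem (multiconnection.sh lines 37-40): disconnect the initiator-side controller, poll lsblk until the SPDK$i serial disappears, then delete the subsystem over RPC. A minimal sketch of that loop, assuming the rpc.py path from this workspace, with waitforserial_disconnect's bounded retry counter simplified to a bare poll (NVMF_SUBSYS=11 matches the last cnode seen in the trace):

NVMF_SUBSYS=11
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 $NVMF_SUBSYS); do
    # Drop the initiator-side controller first so no block device still
    # references the namespace when the subsystem is deleted.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # Mirror waitforserial_disconnect: wait until no device with serial
    # SPDK$i shows up in lsblk before removing the subsystem.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    $rpc_py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done

After the loop, nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring and kills the target pid (1630972 here), which is exactly the rmmod/killprocess sequence in the trace above.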
00:26:42.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:42.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.484 --rc genhtml_branch_coverage=1 00:26:42.484 --rc genhtml_function_coverage=1 00:26:42.484 --rc genhtml_legend=1 00:26:42.484 --rc geninfo_all_blocks=1 00:26:42.484 --rc geninfo_unexecuted_blocks=1 00:26:42.484 00:26:42.484 ' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:42.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.484 --rc genhtml_branch_coverage=1 00:26:42.484 --rc genhtml_function_coverage=1 00:26:42.484 --rc genhtml_legend=1 00:26:42.484 --rc geninfo_all_blocks=1 00:26:42.484 --rc geninfo_unexecuted_blocks=1 00:26:42.484 00:26:42.484 ' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:42.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.484 --rc genhtml_branch_coverage=1 00:26:42.484 --rc genhtml_function_coverage=1 00:26:42.484 --rc genhtml_legend=1 00:26:42.484 --rc geninfo_all_blocks=1 00:26:42.484 --rc geninfo_unexecuted_blocks=1 00:26:42.484 00:26:42.484 ' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:42.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.484 --rc genhtml_branch_coverage=1 00:26:42.484 --rc genhtml_function_coverage=1 00:26:42.484 --rc genhtml_legend=1 00:26:42.484 --rc geninfo_all_blocks=1 00:26:42.484 --rc geninfo_unexecuted_blocks=1 00:26:42.484 00:26:42.484 ' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.484 05:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:42.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.484 05:19:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.703 05:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:50.703 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:50.704 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.704 05:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:50.704 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:50.704 Found net devices under 0000:31:00.0: cvl_0_0 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.704 05:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:50.704 Found net devices under 0000:31:00.1: cvl_0_1 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.704 05:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:50.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:26:50.704 00:26:50.704 --- 10.0.0.2 ping statistics --- 00:26:50.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.704 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:26:50.704 00:26:50.704 --- 10.0.0.1 ping statistics --- 00:26:50.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.704 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1647796 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
1647796 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1647796 ']' 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.704 05:20:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.704 [2024-12-09 05:20:03.980341] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:26:50.704 [2024-12-09 05:20:03.980479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.704 [2024-12-09 05:20:04.147689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.704 [2024-12-09 05:20:04.274537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.704 [2024-12-09 05:20:04.274606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.705 [2024-12-09 05:20:04.274620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.705 [2024-12-09 05:20:04.274633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.705 [2024-12-09 05:20:04.274643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
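Before the nvmf_tgt launch logged above, nvmf_tcp_init moved cvl_0_0 into the cvl_0_0_ns_spdk namespace (10.0.0.2) and left cvl_0_1 in the root namespace (10.0.0.1), so initiator traffic crosses the physical E810 link. A hedged sketch of that bring-up, built from the commands visible in the trace, with waitforlisten's timeout handling reduced to a bare poll of the RPC socket:

spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Topology from nvmf_tcp_init: target NIC in its own namespace,
# initiator NIC in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Launch the target inside the namespace and block until its
# UNIX-domain RPC socket answers (the real waitforlisten additionally
# bounds the retries and checks that the pid is still alive).
ip netns exec cvl_0_0_ns_spdk "$spdk_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until $rpc_py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

Once the socket answers, initiator_timeout.sh lines 19-27 configure the fixture over rpc_cmd exactly as traced below: a 64 MiB, 512 B-block Malloc0; a Delay0 delay bdev layered on it at 30 us for all four latency classes; the tcp transport; and subsystem nqn.2016-06.io.spdk:cnode1 with Delay0 as its namespace, listening on 10.0.0.2:4420.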
00:26:50.705 [2024-12-09 05:20:04.277614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.705 [2024-12-09 05:20:04.277750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.705 [2024-12-09 05:20:04.277905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.705 [2024-12-09 05:20:04.277922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.007 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.007 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.008 Malloc0 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.008 Delay0 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.008 [2024-12-09 05:20:04.918945] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.008 05:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.008 [2024-12-09 05:20:04.960953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.008 05:20:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:53.050 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:53.051 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:53.051 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.051 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:53.051 05:20:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:54.965 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:54.965 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:54.965 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:54.965 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:54.965 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:54.965 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:54.965 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1648538 00:26:54.965 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:54.965 05:20:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:54.965 [global] 00:26:54.965 thread=1 00:26:54.965 invalidate=1 00:26:54.965 rw=write 00:26:54.965 time_based=1 00:26:54.965 runtime=60 00:26:54.965 ioengine=libaio 00:26:54.965 direct=1 00:26:54.965 bs=4096 00:26:54.965 iodepth=1 00:26:54.965 norandommap=0 00:26:54.965 numjobs=1 00:26:54.965 00:26:54.965 verify_dump=1 00:26:54.966 verify_backlog=512 00:26:54.966 verify_state_save=0 00:26:54.966 do_verify=1 00:26:54.966 verify=crc32c-intel 00:26:54.966 [job0] 00:26:54.966 filename=/dev/nvme0n1 00:26:54.966 Could not set queue depth (nvme0n1) 00:26:54.966 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:54.966 fio-3.35 00:26:54.966 Starting 1 thread 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.266 true 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.266 true 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.266 true 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.266 true 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.266 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.811 05:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.811 true 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.811 true 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.811 true 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.811 true 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:00.811 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1648538 00:27:57.070 00:27:57.070 job0: (groupid=0, jobs=1): err= 0: pid=1648838: Mon Dec 9 05:21:09 2024 00:27:57.070 read: IOPS=58, BW=234KiB/s (240kB/s)(13.7MiB/60001msec) 00:27:57.070 slat (nsec): min=6541, max=67418, avg=27104.56, stdev=3382.87 00:27:57.070 clat (usec): min=344, max=41930k, avg=16417.02, stdev=707358.21 00:27:57.070 lat (usec): min=352, max=41931k, avg=16444.12, stdev=707358.21 00:27:57.070 clat percentiles (usec): 00:27:57.070 | 1.00th=[ 627], 5.00th=[ 799], 10.00th=[ 873], 00:27:57.070 | 20.00th=[ 930], 30.00th=[ 955], 40.00th=[ 971], 00:27:57.070 | 50.00th=[ 988], 60.00th=[ 1012], 70.00th=[ 1037], 00:27:57.070 | 80.00th=[ 1057], 90.00th=[ 1156], 95.00th=[ 41681], 00:27:57.070 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:27:57.070 | 99.95th=[ 44827], 99.99th=[17112761] 00:27:57.070 write: IOPS=59, BW=239KiB/s (245kB/s)(14.0MiB/60001msec); 0 zone resets 00:27:57.070 slat (usec): min=9, max=28801, avg=39.46, stdev=480.67 00:27:57.070 clat (usec): min=163, max=1204, avg=562.52, stdev=125.23 00:27:57.070 lat (usec): min=174, max=29690, avg=601.97, stdev=502.82 00:27:57.070 clat percentiles (usec): 00:27:57.070 | 1.00th=[ 285], 5.00th=[ 355], 10.00th=[ 408], 20.00th=[ 465], 00:27:57.070 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:27:57.070 | 70.00th=[ 619], 80.00th=[ 668], 90.00th=[ 734], 95.00th=[ 775], 00:27:57.070 | 99.00th=[ 857], 99.50th=[ 889], 
99.90th=[ 1004], 99.95th=[ 1156], 00:27:57.070 | 99.99th=[ 1205] 00:27:57.070 bw ( KiB/s): min= 216, max= 4096, per=100.00%, avg=2851.20, stdev=1372.07, samples=10 00:27:57.070 iops : min= 54, max= 1024, avg=712.80, stdev=343.02, samples=10 00:27:57.070 lat (usec) : 250=0.34%, 500=15.07%, 750=32.36%, 1000=30.84% 00:27:57.070 lat (msec) : 2=17.10%, 4=0.01%, 50=4.25%, >=2000=0.01% 00:27:57.070 cpu : usr=0.25%, sys=0.47%, ctx=7104, majf=0, minf=1 00:27:57.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.070 issued rwts: total=3514,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:57.070 00:27:57.070 Run status group 0 (all jobs): 00:27:57.070 READ: bw=234KiB/s (240kB/s), 234KiB/s-234KiB/s (240kB/s-240kB/s), io=13.7MiB (14.4MB), run=60001-60001msec 00:27:57.070 WRITE: bw=239KiB/s (245kB/s), 239KiB/s-239KiB/s (245kB/s-245kB/s), io=14.0MiB (14.7MB), run=60001-60001msec 00:27:57.070 00:27:57.070 Disk stats (read/write): 00:27:57.070 nvme0n1: ios=3464/3584, merge=0/0, ticks=16861/1625, in_queue=18486, util=99.91% 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:57.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:57.070 nvmf hotplug test: fio successful as expected 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:57.070 05:21:09 
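The fio job above writes 4 KiB blocks at queue depth 1 with crc32c verification for 60 seconds. An equivalent command-line invocation of the same job file, as a sketch (the fio-wrapper normally generates the job file shown in the trace; flag spellings follow standard fio options):

fio --name=job0 --filename=/dev/nvme0n1 --rw=write --bs=4096 \
    --ioengine=libaio --direct=1 --iodepth=1 --numjobs=1 --thread \
    --time_based --runtime=60 --invalidate=1 \
    --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0

The collapsed throughput (~234-239 KiB/s) and the multi-second completion-latency tail in the results are expected here: mid-run the test raises Delay0's latencies to roughly 31 s via bdev_delay_update_latency and then drops them back to 30 us, and fio surviving that stall without I/O errors is what the final "fio successful as expected" check asserts.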
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:57.070 rmmod nvme_tcp 00:27:57.070 rmmod nvme_fabrics 00:27:57.070 rmmod nvme_keyring 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1647796 ']' 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1647796 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1647796 ']' 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1647796 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647796 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647796' 00:27:57.070 killing process with pid 1647796 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1647796 00:27:57.070 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1647796 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:57.070 05:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.070 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.071 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.449 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:58.449 00:27:58.449 real 1m16.218s 00:27:58.449 user 4m38.131s 00:27:58.449 sys 0m7.967s 00:27:58.449 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:58.449 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:58.449 ************************************ 00:27:58.449 END TEST nvmf_initiator_timeout 00:27:58.449 ************************************ 00:27:58.449 05:21:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:58.449 05:21:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:58.449 05:21:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:58.449 05:21:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.449 05:21:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.620 05:21:19 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:06.620 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:06.620 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:06.620 Found net devices under 0000:31:00.0: cvl_0_0 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:06.620 Found net devices under 0000:31:00.1: cvl_0_1 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:06.620 ************************************ 00:28:06.620 START TEST nvmf_perf_adq 00:28:06.620 ************************************ 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:06.620 * Looking for test storage... 
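Note on the version check traced next: scripts/common.sh compares the installed lcov against 1.15 via its lt/cmp_versions helpers before picking coverage options. A minimal standalone sketch of that dotted-version comparison, assuming our own function name (ver_lt) rather than the script's internals:

ver_lt() {
    # Succeed (return 0) when dotted version $1 sorts before $2.
    local IFS=.
    local -a a=($1) b=($2)
    local i x y n
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "1.15 < 2"   # mirrors the 'lt 1.15 2' call in the trace below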
00:28:06.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:06.620 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:06.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.621 --rc genhtml_branch_coverage=1 00:28:06.621 --rc genhtml_function_coverage=1 00:28:06.621 --rc genhtml_legend=1 00:28:06.621 --rc geninfo_all_blocks=1 00:28:06.621 --rc geninfo_unexecuted_blocks=1 00:28:06.621 00:28:06.621 ' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:06.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.621 --rc genhtml_branch_coverage=1 00:28:06.621 --rc genhtml_function_coverage=1 00:28:06.621 --rc genhtml_legend=1 00:28:06.621 --rc geninfo_all_blocks=1 00:28:06.621 --rc geninfo_unexecuted_blocks=1 00:28:06.621 00:28:06.621 ' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:06.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.621 --rc genhtml_branch_coverage=1 00:28:06.621 --rc genhtml_function_coverage=1 00:28:06.621 --rc genhtml_legend=1 00:28:06.621 --rc geninfo_all_blocks=1 00:28:06.621 --rc geninfo_unexecuted_blocks=1 00:28:06.621 00:28:06.621 ' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:06.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.621 --rc genhtml_branch_coverage=1 00:28:06.621 --rc genhtml_function_coverage=1 00:28:06.621 --rc genhtml_legend=1 00:28:06.621 --rc geninfo_all_blocks=1 00:28:06.621 --rc geninfo_unexecuted_blocks=1 00:28:06.621 00:28:06.621 ' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
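The long trace that follows is test/nvmf/common.sh seeding the test environment. A condensed restatement of those defaults — values copied from the trace; grouping them into one block and the command-substitution form of the hostnqn line are editorial:

NVMF_PORT=4420                       # first listener port
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME     # namespace serial the waitforserial helpers grep for
NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}      # our derivation; the trace reuses the uuid as host ID
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn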
00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:06.621 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.621 05:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:13.201 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.202 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:13.202 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.202 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.202 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.202 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.202 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.202 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.202 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.202 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:13.202 05:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:13.202 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:13.202 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:13.202 Found net devices under 0000:31:00.0: cvl_0_0 00:28:13.202 05:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:13.202 Found net devices under 0000:31:00.1: cvl_0_1 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:13.202 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:15.117 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:18.415 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:23.706 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:23.706 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:23.706 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.706 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:23.706 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:23.706 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:23.706 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.706 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.706 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.706 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:23.706 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:23.706 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:23.706 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.706 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:23.707 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:23.707 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:23.707 Found net devices under 0000:31:00.0: cvl_0_0 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:23.707 Found net devices under 0000:31:00.1: cvl_0_1 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:23.707 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:23.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:28:23.708 00:28:23.708 --- 10.0.0.2 ping statistics --- 00:28:23.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.708 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:28:23.708 00:28:23.708 --- 10.0.0.1 ping statistics --- 00:28:23.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.708 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1670742 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1670742 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1670742 ']' 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.708 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 [2024-12-09 05:21:37.478523] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
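For reference, the nvmf_tgt launch traced above can be reproduced by hand inside the same network namespace; the command is taken verbatim from the trace, while the polling loop is a simplified stand-in for the suite's waitforlisten helper:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# Wait for the default RPC socket the log mentions (/var/tmp/spdk.sock).
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
echo "nvmf_tgt up as pid $nvmfpid"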
00:28:23.708 [2024-12-09 05:21:37.478652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.708 [2024-12-09 05:21:37.643912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.970 [2024-12-09 05:21:37.772618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.970 [2024-12-09 05:21:37.772688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.970 [2024-12-09 05:21:37.772701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.970 [2024-12-09 05:21:37.772714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.970 [2024-12-09 05:21:37.772724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.970 [2024-12-09 05:21:37.775796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.970 [2024-12-09 05:21:37.775948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.970 [2024-12-09 05:21:37.776014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.970 [2024-12-09 05:21:37.776041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.542 
05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.542 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.803 [2024-12-09 05:21:38.676917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.803 Malloc1 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.803 [2024-12-09 05:21:38.792282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.803 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.064 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1671012 00:28:25.064 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:25.064 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:26.978 "tick_rate": 2400000000, 00:28:26.978 "poll_groups": [ 00:28:26.978 { 00:28:26.978 "name": "nvmf_tgt_poll_group_000", 00:28:26.978 "admin_qpairs": 1, 00:28:26.978 "io_qpairs": 1, 00:28:26.978 "current_admin_qpairs": 1, 00:28:26.978 "current_io_qpairs": 1, 00:28:26.978 "pending_bdev_io": 0, 00:28:26.978 "completed_nvme_io": 16997, 00:28:26.978 "transports": [ 00:28:26.978 { 00:28:26.978 "trtype": "TCP" 00:28:26.978 } 00:28:26.978 ] 00:28:26.978 }, 00:28:26.978 { 00:28:26.978 "name": "nvmf_tgt_poll_group_001", 00:28:26.978 "admin_qpairs": 0, 00:28:26.978 "io_qpairs": 1, 00:28:26.978 "current_admin_qpairs": 0, 00:28:26.978 "current_io_qpairs": 1, 00:28:26.978 "pending_bdev_io": 0, 00:28:26.978 "completed_nvme_io": 17617, 00:28:26.978 "transports": [ 00:28:26.978 { 00:28:26.978 "trtype": "TCP" 00:28:26.978 } 00:28:26.978 ] 00:28:26.978 }, 00:28:26.978 { 00:28:26.978 "name": "nvmf_tgt_poll_group_002", 00:28:26.978 "admin_qpairs": 0, 00:28:26.978 "io_qpairs": 1, 00:28:26.978 "current_admin_qpairs": 0, 00:28:26.978 "current_io_qpairs": 1, 00:28:26.978 "pending_bdev_io": 0, 00:28:26.978 "completed_nvme_io": 16735, 00:28:26.978 "transports": [ 00:28:26.978 { 00:28:26.978 "trtype": "TCP" 00:28:26.978 } 00:28:26.978 ] 00:28:26.978 }, 00:28:26.978 { 00:28:26.978 "name": "nvmf_tgt_poll_group_003", 00:28:26.978 "admin_qpairs": 0, 00:28:26.978 "io_qpairs": 1, 00:28:26.978 "current_admin_qpairs": 0, 00:28:26.978 "current_io_qpairs": 1, 00:28:26.978 "pending_bdev_io": 0, 00:28:26.978 "completed_nvme_io": 16547, 00:28:26.978 "transports": [ 00:28:26.978 { 00:28:26.978 "trtype": "TCP" 00:28:26.978 } 00:28:26.978 ] 00:28:26.978 } 00:28:26.978 ] 00:28:26.978 }' 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:26.978 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1671012 00:28:35.114 Initializing NVMe Controllers 00:28:35.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:35.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:35.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:35.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7
00:28:35.114 Initialization complete. Launching workers.
00:28:35.114 ========================================================
00:28:35.114                                                                              Latency(us)
00:28:35.114 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:28:35.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:   13111.50      51.22    4881.53    1274.07   11959.39
00:28:35.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:   13756.18      53.74    4651.63    1007.27   12592.94
00:28:35.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:   13299.70      51.95    4812.70    1325.75   12787.62
00:28:35.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:   13647.59      53.31    4697.04    1238.85   44012.03
00:28:35.114 ========================================================
00:28:35.114 Total                                                                    :   53814.97     210.21    4758.97    1007.27   44012.03
00:28:35.114
00:28:35.114 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:28:35.114 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:35.114 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:28:35.114 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:35.114 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:28:35.114 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:35.114 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:35.114 rmmod nvme_tcp
00:28:35.374 rmmod nvme_fabrics
00:28:35.374 rmmod nvme_keyring
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1670742 ']'
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1670742
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1670742 ']'
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1670742
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1670742
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1670742'
00:28:35.374 killing process with pid 1670742
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1670742
00:28:35.374 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1670742
00:28:35.949 05:21:49
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.949 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.493 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:38.493 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:38.493 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:38.493 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:39.448 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:41.997 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.284 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:47.285 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:47.285 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:47.285 Found net devices under 0000:31:00.0: cvl_0_0 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:47.285 Found net devices under 0000:31:00.1: cvl_0_1 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.285 05:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:47.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:47.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms
00:28:47.285
00:28:47.285 --- 10.0.0.2 ping statistics ---
00:28:47.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:47.285 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:47.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:47.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms
00:28:47.285
00:28:47.285 --- 10.0.0.1 ping statistics ---
00:28:47.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:47.285 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:28:47.285 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:28:47.285 net.core.busy_poll = 1
00:28:47.285 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:28:47.285 net.core.busy_read = 1
00:28:47.285 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:28:47.285 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:28:47.285 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:28:47.285 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:28:47.285 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1675779
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1675779
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1675779 ']'
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:47.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:47.547 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:47.547 [2024-12-09 05:22:01.417531] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:28:47.547 [2024-12-09 05:22:01.417662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:47.808 [2024-12-09 05:22:01.586942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:47.808 [2024-12-09 05:22:01.714625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
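Note: the adq_configure_driver sequence traced above is the core of the ADQ setup this test exercises: hardware TC offload is enabled on the E810 port, busy polling is switched on, the channels are split into two traffic classes with mqprio, and a hardware flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into the application traffic class. A minimal standalone sketch of the same sequence, assuming a hypothetical interface name passed as $1 (the run above executes the equivalent commands against cvl_0_0 inside the cvl_0_0_ns_spdk namespace):

  # ADQ-style traffic-class setup, modeled on the tc/ethtool commands traced above.
  set -euo pipefail
  IFACE=${1:?"usage: $0 <interface>"}   # hypothetical name; the test uses cvl_0_0

  # Let skip_sw filters be executed by the NIC rather than the kernel.
  ethtool --offload "$IFACE" hw-tc-offload on

  # Poll sockets from the application instead of sleeping on interrupts.
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1

  # Two traffic classes: TC0 gets 2 queues at offset 0, TC1 gets 2 queues at
  # offset 2; 'hw 1 mode channel' hands the queue mapping to the NIC.
  tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

  # Steer inbound NVMe/TCP (TCP dst port 4420 on 10.0.0.2) into TC1 in hardware.
  tc qdisc add dev "$IFACE" ingress
  tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 \
      flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

With the filter in place, the target is started with sock_impl_set_options --enable-placement-id 1 (see the trace that follows), which is what lets SPDK pin each accepted connection to the poll group that owns its hardware queue.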
00:28:47.808 [2024-12-09 05:22:01.714695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.808 [2024-12-09 05:22:01.714708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.808 [2024-12-09 05:22:01.714722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.808 [2024-12-09 05:22:01.714732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.808 [2024-12-09 05:22:01.717608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.808 [2024-12-09 05:22:01.717742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.808 [2024-12-09 05:22:01.717893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.808 [2024-12-09 05:22:01.717930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.380 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.380 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:48.380 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.380 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.380 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.381 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.643 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.643 05:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:48.643 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.643 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.643 [2024-12-09 05:22:02.605786] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.643 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.643 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:48.643 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.643 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.905 Malloc1 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.905 [2024-12-09 05:22:02.725222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1675955 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:48.905 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:50.822 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:50.822 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.822 05:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.822 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.822 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:50.822 "tick_rate": 2400000000, 00:28:50.822 "poll_groups": [ 00:28:50.822 { 00:28:50.822 "name": "nvmf_tgt_poll_group_000", 00:28:50.822 "admin_qpairs": 1, 00:28:50.822 "io_qpairs": 4, 00:28:50.822 "current_admin_qpairs": 1, 00:28:50.822 "current_io_qpairs": 4, 00:28:50.822 "pending_bdev_io": 0, 00:28:50.822 "completed_nvme_io": 31092, 00:28:50.822 "transports": [ 00:28:50.822 { 00:28:50.822 "trtype": "TCP" 00:28:50.822 } 00:28:50.822 ] 00:28:50.822 }, 00:28:50.822 { 00:28:50.822 "name": "nvmf_tgt_poll_group_001", 00:28:50.822 "admin_qpairs": 0, 00:28:50.822 "io_qpairs": 0, 00:28:50.822 "current_admin_qpairs": 0, 00:28:50.822 "current_io_qpairs": 0, 00:28:50.822 "pending_bdev_io": 0, 00:28:50.822 "completed_nvme_io": 0, 00:28:50.822 "transports": [ 00:28:50.822 { 00:28:50.822 "trtype": "TCP" 00:28:50.822 } 00:28:50.822 ] 00:28:50.822 }, 00:28:50.822 { 00:28:50.822 "name": "nvmf_tgt_poll_group_002", 00:28:50.822 "admin_qpairs": 0, 00:28:50.822 "io_qpairs": 0, 00:28:50.822 "current_admin_qpairs": 0, 00:28:50.822 "current_io_qpairs": 0, 00:28:50.822 "pending_bdev_io": 0, 00:28:50.822 "completed_nvme_io": 0, 00:28:50.822 "transports": [ 00:28:50.822 { 00:28:50.822 "trtype": "TCP" 00:28:50.822 } 00:28:50.822 ] 00:28:50.822 }, 00:28:50.822 { 00:28:50.822 "name": "nvmf_tgt_poll_group_003", 00:28:50.822 "admin_qpairs": 0, 00:28:50.822 "io_qpairs": 0, 00:28:50.822 "current_admin_qpairs": 0, 00:28:50.822 "current_io_qpairs": 0, 00:28:50.822 "pending_bdev_io": 0, 00:28:50.822 "completed_nvme_io": 0, 00:28:50.822 "transports": [ 00:28:50.822 { 00:28:50.822 "trtype": "TCP" 00:28:50.822 } 00:28:50.822 ] 00:28:50.822 } 00:28:50.822 ] 00:28:50.822 }' 00:28:50.822 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:50.822 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:50.822 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:28:50.822 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:28:50.822 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1675955 00:28:58.958 Initializing NVMe Controllers 00:28:58.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:58.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:58.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:58.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:58.958 Initialization complete. Launching workers. 
00:28:58.958 ========================================================
00:28:58.958                                                                              Latency(us)
00:28:58.958 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:28:58.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:    5865.20      22.91   10914.80    1195.85   62134.99
00:28:58.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:    6254.50      24.43   10235.12    1246.72   65167.70
00:28:58.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:    5910.50      23.09   10830.54    1216.10   60144.35
00:28:58.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:    4350.30      16.99   14717.25    1536.50   66055.45
00:28:58.958 ========================================================
00:28:58.958 Total                                                                    :   22380.50      87.42   11441.72    1195.85   66055.45
00:28:58.958
00:28:58.958 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:28:58.958 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:58.958 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:28:58.958 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:58.958 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:28:58.958 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:58.958 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:59.219 rmmod nvme_tcp
00:28:59.219 rmmod nvme_fabrics
00:28:59.219 rmmod nvme_keyring
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1675779 ']'
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1675779
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1675779 ']'
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1675779
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1675779
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1675779'
00:28:59.219 killing process with pid 1675779
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1675779
00:28:59.219 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1675779
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:59.790 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:03.089 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:03.089 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:29:03.089
00:29:03.089 real    0m57.381s
00:29:03.089 user    2m54.190s
00:29:03.089 sys     0m13.314s
00:29:03.089 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:03.089 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:03.089 ************************************
00:29:03.089 END TEST nvmf_perf_adq
00:29:03.089 ************************************
00:29:03.089 05:22:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:29:03.089 05:22:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:03.089 05:22:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:03.089 05:22:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:03.089 ************************************
00:29:03.089 START TEST nvmf_shutdown
00:29:03.089 ************************************
00:29:03.089 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:29:03.089 * Looking for test storage...
00:29:03.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:03.089 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:03.089 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:03.089 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:03.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.351 --rc genhtml_branch_coverage=1 00:29:03.351 --rc genhtml_function_coverage=1 00:29:03.351 --rc genhtml_legend=1 00:29:03.351 --rc geninfo_all_blocks=1 00:29:03.351 --rc geninfo_unexecuted_blocks=1 00:29:03.351 00:29:03.351 ' 00:29:03.351 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:03.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.351 --rc genhtml_branch_coverage=1 00:29:03.351 --rc genhtml_function_coverage=1 00:29:03.351 --rc genhtml_legend=1 00:29:03.351 --rc geninfo_all_blocks=1 00:29:03.351 --rc geninfo_unexecuted_blocks=1 00:29:03.351 00:29:03.351 ' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:03.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.352 --rc genhtml_branch_coverage=1 00:29:03.352 --rc genhtml_function_coverage=1 00:29:03.352 --rc genhtml_legend=1 00:29:03.352 --rc geninfo_all_blocks=1 00:29:03.352 --rc geninfo_unexecuted_blocks=1 00:29:03.352 00:29:03.352 ' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:03.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.352 --rc genhtml_branch_coverage=1 00:29:03.352 --rc genhtml_function_coverage=1 00:29:03.352 --rc genhtml_legend=1 00:29:03.352 --rc geninfo_all_blocks=1 00:29:03.352 --rc geninfo_unexecuted_blocks=1 00:29:03.352 00:29:03.352 ' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
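Note: the scripts/common.sh trace above is the harness gating lcov branch/function coverage options on the installed lcov version: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field, treating missing or non-numeric fields as 0. A minimal sketch of that comparison idiom in the same shell style, assuming a hypothetical helper name version_lt (the actual SPDK helpers are lt/cmp_versions):

  # Field-wise dotted-version comparison, as in the cmp_versions trace above.
  # version_lt 1.15 2 -> returns 0 (true: 1.15 < 2); version_lt 2 1.15 -> 1
  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v a b max=${#ver1[@]}
      (( ${#ver2[@]} > max )) && max=${#ver2[@]}
      for (( v = 0; v < max; v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
          [[ $a =~ ^[0-9]+$ ]] || a=0       # non-numeric fields count as 0
          [[ $b =~ ^[0-9]+$ ]] || b=0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1   # equal means not less-than
  }

In the run above the less-than branch is taken (the reported lcov is pre-2.0), so the harness selects the plain --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options captured in LCOV_OPTS in the trace.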
00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:03.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:03.352 05:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:03.352 ************************************ 00:29:03.352 START TEST nvmf_shutdown_tc1 00:29:03.352 ************************************ 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:03.352 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.490 05:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.490 05:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:11.490 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.490 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:11.491 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:11.491 Found net devices under 0000:31:00.0: cvl_0_0 00:29:11.491 05:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:11.491 Found net devices under 0000:31:00.1: cvl_0_1 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:29:11.491 00:29:11.491 --- 10.0.0.2 ping statistics --- 00:29:11.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.491 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:29:11.491 00:29:11.491 --- 10.0.0.1 ping statistics --- 00:29:11.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.491 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1682659 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1682659 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1682659 ']' 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
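The nvmftestinit trace above assembles the whole TCP test topology from the two discovered e810 ports: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables ACCEPT rule is inserted for the NVMe/TCP listener port 4420, and a ping in each direction confirms the link before the target starts. A minimal standalone sketch of the same steps, assuming the cvl_0_0/cvl_0_1 interface names from this run:

  # Target side lives in its own namespace; initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let the NVMe/TCP port through the firewall on the initiator interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions before launching nvmf_tgt.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1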
00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.491 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.491 [2024-12-09 05:22:25.014683] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:11.491 [2024-12-09 05:22:25.014827] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.491 [2024-12-09 05:22:25.179562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.491 [2024-12-09 05:22:25.305570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.491 [2024-12-09 05:22:25.305637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.492 [2024-12-09 05:22:25.305650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.492 [2024-12-09 05:22:25.305664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.492 [2024-12-09 05:22:25.305673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.492 [2024-12-09 05:22:25.308628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.492 [2024-12-09 05:22:25.308765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.492 [2024-12-09 05:22:25.308922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:11.492 [2024-12-09 05:22:25.308924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.065 [2024-12-09 05:22:25.855687] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:12.065 05:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.065 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.065 Malloc1 
00:29:12.065 [2024-12-09 05:22:26.025328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.326 Malloc2 00:29:12.326 Malloc3 00:29:12.326 Malloc4 00:29:12.587 Malloc5 00:29:12.587 Malloc6 00:29:12.587 Malloc7 00:29:12.848 Malloc8 00:29:12.848 Malloc9 00:29:12.848 Malloc10 00:29:12.848 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.848 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:12.848 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.848 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1683046 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1683046 /var/tmp/bdevperf.sock 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1683046 ']' 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:13.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
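With the target listening on 10.0.0.2:4420 and the transport created (rpc_cmd nvmf_create_transport -t tcp -o -u 8192 above), the Malloc1 through Malloc10 lines come from shutdown.sh's create_subsystems loop, which batches per-subsystem RPCs into rpcs.txt: each nqn.2016-06.io.spdk:cnodeN is backed by a 64 MiB, 512-byte-block malloc bdev (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from the top of the script). The batched file itself is not echoed in the trace, so the following is only an illustrative equivalent of one iteration as direct rpc.py calls; the serial number and exact flag set are assumptions:

  # Hypothetical single iteration (i=1) of create_subsystems, issued against
  # the target running inside the namespace; the real test replays rpcs.txt.
  rpc="ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $rpc bdev_malloc_create 64 512 -b Malloc1                  # 64 MiB, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420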
00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.110 { 00:29:13.110 "params": { 00:29:13.110 "name": "Nvme$subsystem", 00:29:13.110 "trtype": "$TEST_TRANSPORT", 00:29:13.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.110 "adrfam": "ipv4", 00:29:13.110 "trsvcid": "$NVMF_PORT", 00:29:13.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.110 "hdgst": ${hdgst:-false}, 00:29:13.110 "ddgst": ${ddgst:-false} 00:29:13.110 }, 00:29:13.110 "method": "bdev_nvme_attach_controller" 00:29:13.110 } 00:29:13.110 EOF 00:29:13.110 )") 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.110 { 00:29:13.110 "params": { 00:29:13.110 "name": "Nvme$subsystem", 00:29:13.110 "trtype": "$TEST_TRANSPORT", 00:29:13.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.110 "adrfam": "ipv4", 00:29:13.110 "trsvcid": "$NVMF_PORT", 00:29:13.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.110 "hdgst": ${hdgst:-false}, 00:29:13.110 "ddgst": ${ddgst:-false} 00:29:13.110 }, 00:29:13.110 "method": "bdev_nvme_attach_controller" 00:29:13.110 } 00:29:13.110 EOF 00:29:13.110 )") 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.110 { 00:29:13.110 "params": { 00:29:13.110 "name": "Nvme$subsystem", 00:29:13.110 "trtype": "$TEST_TRANSPORT", 00:29:13.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.110 "adrfam": "ipv4", 00:29:13.110 "trsvcid": "$NVMF_PORT", 00:29:13.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.110 "hdgst": ${hdgst:-false}, 00:29:13.110 "ddgst": ${ddgst:-false} 00:29:13.110 }, 00:29:13.110 "method": "bdev_nvme_attach_controller" 
00:29:13.110 } 00:29:13.110 EOF 00:29:13.110 )") 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.110 { 00:29:13.110 "params": { 00:29:13.110 "name": "Nvme$subsystem", 00:29:13.110 "trtype": "$TEST_TRANSPORT", 00:29:13.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.110 "adrfam": "ipv4", 00:29:13.110 "trsvcid": "$NVMF_PORT", 00:29:13.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.110 "hdgst": ${hdgst:-false}, 00:29:13.110 "ddgst": ${ddgst:-false} 00:29:13.110 }, 00:29:13.110 "method": "bdev_nvme_attach_controller" 00:29:13.110 } 00:29:13.110 EOF 00:29:13.110 )") 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.110 { 00:29:13.110 "params": { 00:29:13.110 "name": "Nvme$subsystem", 00:29:13.110 "trtype": "$TEST_TRANSPORT", 00:29:13.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.110 "adrfam": "ipv4", 00:29:13.110 "trsvcid": "$NVMF_PORT", 00:29:13.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.110 "hdgst": ${hdgst:-false}, 00:29:13.110 "ddgst": ${ddgst:-false} 00:29:13.110 }, 00:29:13.110 "method": "bdev_nvme_attach_controller" 00:29:13.110 } 00:29:13.110 EOF 00:29:13.110 )") 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.110 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.110 { 00:29:13.110 "params": { 00:29:13.110 "name": "Nvme$subsystem", 00:29:13.110 "trtype": "$TEST_TRANSPORT", 00:29:13.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.110 "adrfam": "ipv4", 00:29:13.110 "trsvcid": "$NVMF_PORT", 00:29:13.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.111 "hdgst": ${hdgst:-false}, 00:29:13.111 "ddgst": ${ddgst:-false} 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 } 00:29:13.111 EOF 00:29:13.111 )") 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.111 { 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme$subsystem", 00:29:13.111 "trtype": "$TEST_TRANSPORT", 00:29:13.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "$NVMF_PORT", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.111 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.111 "hdgst": ${hdgst:-false}, 00:29:13.111 "ddgst": ${ddgst:-false} 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 } 00:29:13.111 EOF 00:29:13.111 )") 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.111 { 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme$subsystem", 00:29:13.111 "trtype": "$TEST_TRANSPORT", 00:29:13.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "$NVMF_PORT", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.111 "hdgst": ${hdgst:-false}, 00:29:13.111 "ddgst": ${ddgst:-false} 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 } 00:29:13.111 EOF 00:29:13.111 )") 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.111 { 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme$subsystem", 00:29:13.111 "trtype": "$TEST_TRANSPORT", 00:29:13.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "$NVMF_PORT", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.111 "hdgst": ${hdgst:-false}, 00:29:13.111 "ddgst": ${ddgst:-false} 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 } 00:29:13.111 EOF 00:29:13.111 )") 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.111 { 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme$subsystem", 00:29:13.111 "trtype": "$TEST_TRANSPORT", 00:29:13.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "$NVMF_PORT", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.111 "hdgst": ${hdgst:-false}, 00:29:13.111 "ddgst": ${ddgst:-false} 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 } 00:29:13.111 EOF 00:29:13.111 )") 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:13.111 [2024-12-09 05:22:26.946847] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:29:13.111 [2024-12-09 05:22:26.946969] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:13.111 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme1", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 },{ 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme2", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 },{ 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme3", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 },{ 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme4", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 },{ 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme5", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 },{ 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme6", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 },{ 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme7", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 },{ 
00:29:13.111 "params": { 00:29:13.111 "name": "Nvme8", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 },{ 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme9", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 },{ 00:29:13.111 "params": { 00:29:13.111 "name": "Nvme10", 00:29:13.111 "trtype": "tcp", 00:29:13.111 "traddr": "10.0.0.2", 00:29:13.111 "adrfam": "ipv4", 00:29:13.111 "trsvcid": "4420", 00:29:13.111 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:13.111 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:13.111 "hdgst": false, 00:29:13.111 "ddgst": false 00:29:13.111 }, 00:29:13.111 "method": "bdev_nvme_attach_controller" 00:29:13.111 }' 00:29:13.371 [2024-12-09 05:22:27.106743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.372 [2024-12-09 05:22:27.234793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.756 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.756 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:14.756 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:14.756 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.756 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:14.756 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.756 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1683046 00:29:14.756 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:14.756 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:16.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1683046 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1682659 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # config=() 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.136 { 00:29:16.136 "params": { 00:29:16.136 "name": "Nvme$subsystem", 00:29:16.136 "trtype": "$TEST_TRANSPORT", 00:29:16.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.136 "adrfam": "ipv4", 00:29:16.136 "trsvcid": "$NVMF_PORT", 00:29:16.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.136 "hdgst": ${hdgst:-false}, 00:29:16.136 "ddgst": ${ddgst:-false} 00:29:16.136 }, 00:29:16.136 "method": "bdev_nvme_attach_controller" 00:29:16.136 } 00:29:16.136 EOF 00:29:16.136 )") 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.136 { 00:29:16.136 "params": { 00:29:16.136 "name": "Nvme$subsystem", 00:29:16.136 "trtype": "$TEST_TRANSPORT", 00:29:16.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.136 "adrfam": "ipv4", 00:29:16.136 "trsvcid": "$NVMF_PORT", 00:29:16.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.136 "hdgst": ${hdgst:-false}, 00:29:16.136 "ddgst": ${ddgst:-false} 00:29:16.136 }, 00:29:16.136 "method": "bdev_nvme_attach_controller" 00:29:16.136 } 00:29:16.136 EOF 00:29:16.136 )") 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.136 { 00:29:16.136 "params": { 00:29:16.136 "name": "Nvme$subsystem", 00:29:16.136 "trtype": "$TEST_TRANSPORT", 00:29:16.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.136 "adrfam": "ipv4", 00:29:16.136 "trsvcid": "$NVMF_PORT", 00:29:16.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.136 "hdgst": ${hdgst:-false}, 00:29:16.136 "ddgst": ${ddgst:-false} 00:29:16.136 }, 00:29:16.136 "method": "bdev_nvme_attach_controller" 00:29:16.136 } 00:29:16.136 EOF 00:29:16.136 )") 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.136 { 00:29:16.136 "params": { 00:29:16.136 "name": "Nvme$subsystem", 00:29:16.136 "trtype": "$TEST_TRANSPORT", 00:29:16.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.136 "adrfam": "ipv4", 00:29:16.136 "trsvcid": "$NVMF_PORT", 00:29:16.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.136 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.136 "hdgst": ${hdgst:-false}, 00:29:16.136 "ddgst": ${ddgst:-false} 00:29:16.136 }, 00:29:16.136 "method": "bdev_nvme_attach_controller" 00:29:16.136 } 00:29:16.136 EOF 00:29:16.136 )") 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.136 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.137 { 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme$subsystem", 00:29:16.137 "trtype": "$TEST_TRANSPORT", 00:29:16.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "$NVMF_PORT", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.137 "hdgst": ${hdgst:-false}, 00:29:16.137 "ddgst": ${ddgst:-false} 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 } 00:29:16.137 EOF 00:29:16.137 )") 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.137 { 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme$subsystem", 00:29:16.137 "trtype": "$TEST_TRANSPORT", 00:29:16.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "$NVMF_PORT", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.137 "hdgst": ${hdgst:-false}, 00:29:16.137 "ddgst": ${ddgst:-false} 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 } 00:29:16.137 EOF 00:29:16.137 )") 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.137 { 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme$subsystem", 00:29:16.137 "trtype": "$TEST_TRANSPORT", 00:29:16.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "$NVMF_PORT", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.137 "hdgst": ${hdgst:-false}, 00:29:16.137 "ddgst": ${ddgst:-false} 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 } 00:29:16.137 EOF 00:29:16.137 )") 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.137 { 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme$subsystem", 00:29:16.137 "trtype": "$TEST_TRANSPORT", 
00:29:16.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "$NVMF_PORT", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.137 "hdgst": ${hdgst:-false}, 00:29:16.137 "ddgst": ${ddgst:-false} 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 } 00:29:16.137 EOF 00:29:16.137 )") 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.137 { 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme$subsystem", 00:29:16.137 "trtype": "$TEST_TRANSPORT", 00:29:16.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "$NVMF_PORT", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.137 "hdgst": ${hdgst:-false}, 00:29:16.137 "ddgst": ${ddgst:-false} 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 } 00:29:16.137 EOF 00:29:16.137 )") 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.137 { 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme$subsystem", 00:29:16.137 "trtype": "$TEST_TRANSPORT", 00:29:16.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "$NVMF_PORT", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.137 "hdgst": ${hdgst:-false}, 00:29:16.137 "ddgst": ${ddgst:-false} 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 } 00:29:16.137 EOF 00:29:16.137 )") 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.137 [2024-12-09 05:22:29.773189] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:16.137 [2024-12-09 05:22:29.773291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683454 ] 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:16.137 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme1", 00:29:16.137 "trtype": "tcp", 00:29:16.137 "traddr": "10.0.0.2", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "4420", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:16.137 "hdgst": false, 00:29:16.137 "ddgst": false 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 },{ 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme2", 00:29:16.137 "trtype": "tcp", 00:29:16.137 "traddr": "10.0.0.2", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "4420", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:16.137 "hdgst": false, 00:29:16.137 "ddgst": false 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 },{ 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme3", 00:29:16.137 "trtype": "tcp", 00:29:16.137 "traddr": "10.0.0.2", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "4420", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:16.137 "hdgst": false, 00:29:16.137 "ddgst": false 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 },{ 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme4", 00:29:16.137 "trtype": "tcp", 00:29:16.137 "traddr": "10.0.0.2", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "4420", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:16.137 "hdgst": false, 00:29:16.137 "ddgst": false 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 },{ 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme5", 00:29:16.137 "trtype": "tcp", 00:29:16.137 "traddr": "10.0.0.2", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "4420", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:16.137 "hdgst": false, 00:29:16.137 "ddgst": false 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 },{ 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme6", 00:29:16.137 "trtype": "tcp", 00:29:16.137 "traddr": "10.0.0.2", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "4420", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:16.137 "hdgst": false, 00:29:16.137 "ddgst": false 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 },{ 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme7", 00:29:16.137 "trtype": "tcp", 00:29:16.137 "traddr": "10.0.0.2", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "4420", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:16.137 "hdgst": false, 00:29:16.137 "ddgst": false 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 },{ 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme8", 00:29:16.137 "trtype": "tcp", 00:29:16.137 "traddr": "10.0.0.2", 00:29:16.137 "adrfam": "ipv4", 00:29:16.137 "trsvcid": "4420", 00:29:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:16.137 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:16.137 "hdgst": false, 00:29:16.137 "ddgst": false 00:29:16.137 }, 00:29:16.137 "method": "bdev_nvme_attach_controller" 00:29:16.137 },{ 00:29:16.137 "params": { 00:29:16.137 "name": "Nvme9", 00:29:16.137 "trtype": "tcp", 00:29:16.138 "traddr": "10.0.0.2", 00:29:16.138 "adrfam": "ipv4", 00:29:16.138 "trsvcid": "4420", 00:29:16.138 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:16.138 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:16.138 "hdgst": false, 00:29:16.138 "ddgst": false 00:29:16.138 }, 00:29:16.138 "method": "bdev_nvme_attach_controller" 00:29:16.138 },{ 00:29:16.138 "params": { 00:29:16.138 "name": "Nvme10", 00:29:16.138 "trtype": "tcp", 00:29:16.138 "traddr": "10.0.0.2", 00:29:16.138 "adrfam": "ipv4", 00:29:16.138 "trsvcid": "4420", 00:29:16.138 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:16.138 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:16.138 "hdgst": false, 00:29:16.138 "ddgst": false 00:29:16.138 }, 00:29:16.138 "method": "bdev_nvme_attach_controller" 00:29:16.138 }' 00:29:16.138 [2024-12-09 05:22:29.916685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.138 [2024-12-09 05:22:30.016273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.525 Running I/O for 1 seconds... 00:29:18.906 1731.00 IOPS, 108.19 MiB/s 00:29:18.906 Latency(us) 00:29:18.906 [2024-12-09T04:22:32.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.907 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme1n1 : 1.15 222.93 13.93 0.00 0.00 283959.04 20534.61 246415.36 00:29:18.907 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme2n1 : 1.15 222.26 13.89 0.00 0.00 279869.65 14964.05 270882.13 00:29:18.907 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme3n1 : 1.14 225.03 14.06 0.00 0.00 271313.92 14090.24 276125.01 00:29:18.907 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme4n1 : 1.14 224.08 14.01 0.00 0.00 267431.25 19114.67 263891.63 00:29:18.907 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme5n1 : 1.16 221.11 13.82 0.00 0.00 266267.73 33860.27 260396.37 00:29:18.907 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme6n1 : 1.20 213.84 13.36 0.00 0.00 270404.69 32986.45 277872.64 00:29:18.907 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme7n1 : 1.13 226.65 14.17 0.00 0.00 249264.53 10048.85 277872.64 00:29:18.907 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme8n1 : 1.18 216.07 13.50 0.00 0.00 257144.53 15728.64 286610.77 00:29:18.907 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme9n1 : 1.20 214.16 13.38 0.00 0.00 255666.13 15291.73 284863.15 00:29:18.907 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:29:18.907 Verification LBA range: start 0x0 length 0x400 00:29:18.907 Nvme10n1 : 1.21 264.74 16.55 0.00 0.00 203267.75 11304.96 262144.00 00:29:18.907 [2024-12-09T04:22:32.904Z] =================================================================================================================== 00:29:18.907 [2024-12-09T04:22:32.904Z] Total : 2250.87 140.68 0.00 0.00 259064.02 10048.85 286610.77 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.478 rmmod nvme_tcp 00:29:19.478 rmmod nvme_fabrics 00:29:19.478 rmmod nvme_keyring 00:29:19.478 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1682659 ']' 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1682659 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1682659 ']' 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1682659 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1682659 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:19.740 05:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1682659' 00:29:19.740 killing process with pid 1682659 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1682659 00:29:19.740 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1682659 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.175 05:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.177 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.177 00:29:23.177 real 0m19.874s 00:29:23.177 user 0m44.931s 00:29:23.177 sys 0m7.679s 00:29:23.177 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.177 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:23.177 ************************************ 00:29:23.177 END TEST nvmf_shutdown_tc1 00:29:23.177 ************************************ 00:29:23.177 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:23.177 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:23.177 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.177 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:23.437 ************************************ 00:29:23.437 START TEST nvmf_shutdown_tc2 00:29:23.438 ************************************ 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.438 05:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:23.438 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.438 05:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:23.438 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:23.438 Found net devices under 0000:31:00.0: cvl_0_0 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.438 05:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:23.438 Found net devices under 0000:31:00.1: cvl_0_1 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.438 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.439 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:29:23.699 00:29:23.699 --- 10.0.0.2 ping statistics --- 00:29:23.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.699 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:29:23.699 00:29:23.699 --- 10.0.0.1 ping statistics --- 00:29:23.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.699 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1685198 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1685198 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1685198 ']' 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.699 05:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.699 [2024-12-09 05:22:37.669234] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:23.699 [2024-12-09 05:22:37.669348] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.959 [2024-12-09 05:22:37.821330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.959 [2024-12-09 05:22:37.904409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.959 [2024-12-09 05:22:37.904446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.959 [2024-12-09 05:22:37.904454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.959 [2024-12-09 05:22:37.904463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.959 [2024-12-09 05:22:37.904469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
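Before tc2's nvmf_tgt could start, nvmftestinit wired the e810 ports into a network namespace. Condensed from the nvmf_tcp_init trace above (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the values this rig detected, not fixed constants):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"           # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port; the SPDK_NVMF comment is what lets the
# iptr helper strip the rule again during nvmftestfini.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                        # root ns -> target (0.684 ms in this run)
ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> initiator (0.324 ms)

With both pings answering, the target app is launched inside the namespace, which is why nvmfappstart below is prefixed with ip netns exec cvl_0_0_ns_spdk.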
00:29:23.959 [2024-12-09 05:22:37.906425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.959 [2024-12-09 05:22:37.906545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.959 [2024-12-09 05:22:37.906637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.959 [2024-12-09 05:22:37.906662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:24.527 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.527 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:24.527 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.527 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.527 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.527 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.527 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.527 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.527 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.527 [2024-12-09 05:22:38.482706] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:24.528 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.787 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.787 Malloc1 00:29:24.787 [2024-12-09 05:22:38.620375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.787 Malloc2 00:29:24.787 Malloc3 00:29:25.046 Malloc4 00:29:25.046 Malloc5 00:29:25.046 Malloc6 00:29:25.046 Malloc7 00:29:25.306 Malloc8 00:29:25.306 Malloc9 00:29:25.306 Malloc10 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1685568 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1685568 /var/tmp/bdevperf.sock 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1685568 ']' 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:25.306 05:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:25.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.306 { 00:29:25.306 "params": { 00:29:25.306 "name": "Nvme$subsystem", 00:29:25.306 "trtype": "$TEST_TRANSPORT", 00:29:25.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.306 "adrfam": "ipv4", 00:29:25.306 "trsvcid": "$NVMF_PORT", 00:29:25.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.306 "hdgst": ${hdgst:-false}, 00:29:25.306 "ddgst": ${ddgst:-false} 00:29:25.306 }, 00:29:25.306 "method": "bdev_nvme_attach_controller" 00:29:25.306 } 00:29:25.306 EOF 00:29:25.306 )") 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.306 { 00:29:25.306 "params": { 00:29:25.306 "name": "Nvme$subsystem", 00:29:25.306 "trtype": "$TEST_TRANSPORT", 00:29:25.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.306 "adrfam": "ipv4", 00:29:25.306 "trsvcid": "$NVMF_PORT", 00:29:25.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.306 "hdgst": ${hdgst:-false}, 00:29:25.306 "ddgst": ${ddgst:-false} 00:29:25.306 }, 00:29:25.306 "method": "bdev_nvme_attach_controller" 00:29:25.306 } 00:29:25.306 EOF 00:29:25.306 )") 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.306 { 00:29:25.306 "params": { 00:29:25.306 
"name": "Nvme$subsystem", 00:29:25.306 "trtype": "$TEST_TRANSPORT", 00:29:25.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.306 "adrfam": "ipv4", 00:29:25.306 "trsvcid": "$NVMF_PORT", 00:29:25.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.306 "hdgst": ${hdgst:-false}, 00:29:25.306 "ddgst": ${ddgst:-false} 00:29:25.306 }, 00:29:25.306 "method": "bdev_nvme_attach_controller" 00:29:25.306 } 00:29:25.306 EOF 00:29:25.306 )") 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.306 { 00:29:25.306 "params": { 00:29:25.306 "name": "Nvme$subsystem", 00:29:25.306 "trtype": "$TEST_TRANSPORT", 00:29:25.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.306 "adrfam": "ipv4", 00:29:25.306 "trsvcid": "$NVMF_PORT", 00:29:25.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.306 "hdgst": ${hdgst:-false}, 00:29:25.306 "ddgst": ${ddgst:-false} 00:29:25.306 }, 00:29:25.306 "method": "bdev_nvme_attach_controller" 00:29:25.306 } 00:29:25.306 EOF 00:29:25.306 )") 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.306 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.306 { 00:29:25.306 "params": { 00:29:25.306 "name": "Nvme$subsystem", 00:29:25.306 "trtype": "$TEST_TRANSPORT", 00:29:25.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.306 "adrfam": "ipv4", 00:29:25.306 "trsvcid": "$NVMF_PORT", 00:29:25.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.306 "hdgst": ${hdgst:-false}, 00:29:25.306 "ddgst": ${ddgst:-false} 00:29:25.306 }, 00:29:25.306 "method": "bdev_nvme_attach_controller" 00:29:25.306 } 00:29:25.306 EOF 00:29:25.306 )") 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.566 { 00:29:25.566 "params": { 00:29:25.566 "name": "Nvme$subsystem", 00:29:25.566 "trtype": "$TEST_TRANSPORT", 00:29:25.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.566 "adrfam": "ipv4", 00:29:25.566 "trsvcid": "$NVMF_PORT", 00:29:25.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.566 "hdgst": ${hdgst:-false}, 00:29:25.566 "ddgst": ${ddgst:-false} 00:29:25.566 }, 00:29:25.566 "method": "bdev_nvme_attach_controller" 00:29:25.566 } 00:29:25.566 EOF 00:29:25.566 )") 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.566 { 00:29:25.566 "params": { 00:29:25.566 "name": "Nvme$subsystem", 00:29:25.566 "trtype": "$TEST_TRANSPORT", 00:29:25.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.566 "adrfam": "ipv4", 00:29:25.566 "trsvcid": "$NVMF_PORT", 00:29:25.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.566 "hdgst": ${hdgst:-false}, 00:29:25.566 "ddgst": ${ddgst:-false} 00:29:25.566 }, 00:29:25.566 "method": "bdev_nvme_attach_controller" 00:29:25.566 } 00:29:25.566 EOF 00:29:25.566 )") 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.566 { 00:29:25.566 "params": { 00:29:25.566 "name": "Nvme$subsystem", 00:29:25.566 "trtype": "$TEST_TRANSPORT", 00:29:25.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.566 "adrfam": "ipv4", 00:29:25.566 "trsvcid": "$NVMF_PORT", 00:29:25.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.566 "hdgst": ${hdgst:-false}, 00:29:25.566 "ddgst": ${ddgst:-false} 00:29:25.566 }, 00:29:25.566 "method": "bdev_nvme_attach_controller" 00:29:25.566 } 00:29:25.566 EOF 00:29:25.566 )") 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.566 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.566 { 00:29:25.566 "params": { 00:29:25.566 "name": "Nvme$subsystem", 00:29:25.566 "trtype": "$TEST_TRANSPORT", 00:29:25.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.566 "adrfam": "ipv4", 00:29:25.566 "trsvcid": "$NVMF_PORT", 00:29:25.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.566 "hdgst": ${hdgst:-false}, 00:29:25.566 "ddgst": ${ddgst:-false} 00:29:25.566 }, 00:29:25.566 "method": "bdev_nvme_attach_controller" 00:29:25.567 } 00:29:25.567 EOF 00:29:25.567 )") 00:29:25.567 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.567 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.567 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.567 { 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme$subsystem", 00:29:25.567 "trtype": "$TEST_TRANSPORT", 00:29:25.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "$NVMF_PORT", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.567 "hdgst": ${hdgst:-false}, 00:29:25.567 "ddgst": ${ddgst:-false} 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 } 00:29:25.567 EOF 00:29:25.567 )") 00:29:25.567 05:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:25.567 [2024-12-09 05:22:39.340593] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:25.567 [2024-12-09 05:22:39.340698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685568 ] 00:29:25.567 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:29:25.567 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:25.567 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme1", 00:29:25.567 "trtype": "tcp", 00:29:25.567 "traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 },{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme2", 00:29:25.567 "trtype": "tcp", 00:29:25.567 "traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 },{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme3", 00:29:25.567 "trtype": "tcp", 00:29:25.567 "traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 },{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme4", 00:29:25.567 "trtype": "tcp", 00:29:25.567 "traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 },{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme5", 00:29:25.567 "trtype": "tcp", 00:29:25.567 "traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 },{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme6", 00:29:25.567 "trtype": "tcp", 00:29:25.567 "traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 },{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme7", 00:29:25.567 "trtype": "tcp", 00:29:25.567 
"traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 },{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme8", 00:29:25.567 "trtype": "tcp", 00:29:25.567 "traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 },{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme9", 00:29:25.567 "trtype": "tcp", 00:29:25.567 "traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 },{ 00:29:25.567 "params": { 00:29:25.567 "name": "Nvme10", 00:29:25.567 "trtype": "tcp", 00:29:25.567 "traddr": "10.0.0.2", 00:29:25.567 "adrfam": "ipv4", 00:29:25.567 "trsvcid": "4420", 00:29:25.567 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:25.567 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:25.567 "hdgst": false, 00:29:25.567 "ddgst": false 00:29:25.567 }, 00:29:25.567 "method": "bdev_nvme_attach_controller" 00:29:25.567 }' 00:29:25.567 [2024-12-09 05:22:39.482814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.827 [2024-12-09 05:22:39.580824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.213 Running I/O for 10 seconds... 
00:29:27.213 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.213 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:27.213 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:27.213 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.213 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.474 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.474 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:27.474 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:27.474 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:27.474 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:27.474 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:27.474 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:27.475 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:27.475 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:27.475 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:27.475 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.475 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.475 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.475 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:27.475 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:27.475 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:27.736 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:27.736 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:27.736 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:27.736 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:27.736 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.736 05:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.737 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.737 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:27.737 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:27.737 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1685568 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1685568 ']' 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1685568 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1685568 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1685568' 00:29:27.998 killing process with pid 1685568 00:29:27.998 05:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1685568
00:29:27.998 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1685568
00:29:28.260 2182.00 IOPS, 136.38 MiB/s [2024-12-09T04:22:42.257Z] Received shutdown signal, test time was about 1.029080 seconds
00:29:28.260
00:29:28.260 Latency(us)
00:29:28.260 [2024-12-09T04:22:42.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.260 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme1n1 : 0.96 199.37 12.46 0.00 0.00 317039.22 21080.75 284863.15
00:29:28.260 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme2n1 : 0.97 198.29 12.39 0.00 0.00 312181.48 15619.41 270882.13
00:29:28.260 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme3n1 : 0.98 260.50 16.28 0.00 0.00 232126.08 23592.96 263891.63
00:29:28.260 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme4n1 : 1.03 248.99 15.56 0.00 0.00 229603.84 19114.67 263891.63
00:29:28.260 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme5n1 : 0.95 202.96 12.68 0.00 0.00 285294.36 25231.36 267386.88
00:29:28.260 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme6n1 : 0.99 258.37 16.15 0.00 0.00 220085.76 30801.92 246415.36
00:29:28.260 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme7n1 : 0.96 209.07 13.07 0.00 0.00 260031.01 9011.20 262144.00
00:29:28.260 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme8n1 : 0.95 201.52 12.60 0.00 0.00 267882.38 19879.25 239424.85
00:29:28.260 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme9n1 : 0.98 196.56 12.29 0.00 0.00 269437.16 15728.64 288358.40
00:29:28.260 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.260 Verification LBA range: start 0x0 length 0x400
00:29:28.260 Nvme10n1 : 0.98 260.81 16.30 0.00 0.00 198458.67 18786.99 237677.23
00:29:28.260 [2024-12-09T04:22:42.257Z] ===================================================================================================================
00:29:28.260 [2024-12-09T04:22:42.257Z] Total : 2236.43 139.78 0.00 0.00 254628.52 9011.20 288358.40
00:29:28.832 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1685198
00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f 
./local-job0-0-verify.state 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.214 rmmod nvme_tcp 00:29:30.214 rmmod nvme_fabrics 00:29:30.214 rmmod nvme_keyring 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1685198 ']' 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1685198 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1685198 ']' 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1685198 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1685198 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:30.214 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1685198' 00:29:30.214 killing process with pid 1685198 00:29:30.215 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1685198 00:29:30.215 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1685198 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.597 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.513 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.513 00:29:33.513 real 0m10.264s 00:29:33.513 user 0m32.819s 00:29:33.513 sys 0m1.542s 00:29:33.513 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.513 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.513 ************************************ 00:29:33.513 END TEST nvmf_shutdown_tc2 00:29:33.513 ************************************ 00:29:33.513 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:33.513 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.513 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.513 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.774 ************************************ 00:29:33.774 START TEST nvmf_shutdown_tc3 00:29:33.774 ************************************ 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:33.774 05:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.774 05:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:33.774 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:33.774 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:33.774 Found net devices under 0000:31:00.0: cvl_0_0 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:33.774 Found net devices under 0000:31:00.1: cvl_0_1 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
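The gather_supported_nvmf_pci_devs walk above matches NIC PCI device IDs against per-family arrays (e810, x722, mlx) and then keeps only devices whose sysfs node exposes a net interface whose link is up. A condensed sketch of the same logic follows; it is a reconstruction that swaps the harness's prebuilt pci_bus_cache for an lspci scan, using the two e810 device IDs (0x1592, 0x159b) visible in the trace.

# Find Intel e810 ports with an up link, as the trace above does.
e810_ids="1592 159b"
for pci in $(lspci -Dnd 8086: | awk '{print $1}'); do
    dev=$(cut -c3- < "/sys/bus/pci/devices/$pci/device")    # e.g. 159b
    [[ " $e810_ids " == *" $dev "* ]] || continue
    for net in "/sys/bus/pci/devices/$pci/net/"*; do        # same glob as the trace
        [[ -e $net ]] || continue
        [[ $(cat "$net/operstate") == up ]] || continue     # the [[ up == up ]] check
        echo "Found net devices under $pci: ${net##*/}"     # e.g. cvl_0_0
    done
done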
00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.774 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.775 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.775 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.775 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.775 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.775 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:29:34.035 00:29:34.035 --- 10.0.0.2 ping statistics --- 00:29:34.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.035 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:34.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:29:34.035 00:29:34.035 --- 10.0.0.1 ping statistics --- 00:29:34.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.035 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1687215 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1687215 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:34.035 05:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1687215 ']' 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.035 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.296 [2024-12-09 05:22:48.033169] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:34.296 [2024-12-09 05:22:48.033302] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.296 [2024-12-09 05:22:48.187014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.296 [2024-12-09 05:22:48.271539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.296 [2024-12-09 05:22:48.271581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.296 [2024-12-09 05:22:48.271590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.296 [2024-12-09 05:22:48.271599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.296 [2024-12-09 05:22:48.271606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
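Everything nvmftestinit has done so far reduces to a short, repeatable sequence: move the target-side port of the two-port NIC into a private network namespace, address both ends of the link, punch a tagged hole for the NVMe/TCP port, confirm reachability, and launch nvmf_tgt inside the namespace. Collected into one sketch below; interface names, addresses and flags are the ones in this run, while the nvmf_tgt and rpc.py paths are abbreviated.

# Reconstruction of the nvmf_tcp_init sequence traced above.
NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
# ACCEPT rule tagged with an SPDK_NVMF comment so the iptr cleanup helper can
# later strip it via iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                               # root ns -> target
ip netns exec $NS ping -c 1 10.0.0.1             # namespace -> initiator
ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
# once the target is up, create the TCP transport with the flags from the trace
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192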
00:29:34.296 [2024-12-09 05:22:48.273666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.296 [2024-12-09 05:22:48.273840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.296 [2024-12-09 05:22:48.273925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.296 [2024-12-09 05:22:48.273952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:34.865 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.865 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:34.865 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.865 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.865 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.865 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.865 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.865 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.865 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.865 [2024-12-09 05:22:48.842044] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.125 05:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.125 Malloc1 00:29:35.125 [2024-12-09 05:22:48.981407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.125 Malloc2 00:29:35.125 Malloc3 00:29:35.385 Malloc4 00:29:35.385 Malloc5 00:29:35.385 Malloc6 00:29:35.385 Malloc7 00:29:35.644 Malloc8 00:29:35.644 Malloc9 00:29:35.644 Malloc10 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1687532 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1687532 /var/tmp/bdevperf.sock 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1687532 ']' 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.644 05:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.644 { 00:29:35.644 "params": { 00:29:35.644 "name": "Nvme$subsystem", 00:29:35.644 "trtype": "$TEST_TRANSPORT", 00:29:35.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.644 "adrfam": "ipv4", 00:29:35.644 "trsvcid": "$NVMF_PORT", 00:29:35.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.644 "hdgst": ${hdgst:-false}, 00:29:35.644 "ddgst": ${ddgst:-false} 00:29:35.644 }, 00:29:35.644 "method": "bdev_nvme_attach_controller" 00:29:35.644 } 00:29:35.644 EOF 00:29:35.644 )") 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.644 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.644 { 00:29:35.644 "params": { 00:29:35.644 "name": "Nvme$subsystem", 00:29:35.645 "trtype": "$TEST_TRANSPORT", 00:29:35.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.645 "adrfam": "ipv4", 00:29:35.645 "trsvcid": "$NVMF_PORT", 00:29:35.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.645 "hdgst": ${hdgst:-false}, 00:29:35.645 "ddgst": ${ddgst:-false} 00:29:35.645 }, 00:29:35.645 "method": "bdev_nvme_attach_controller" 00:29:35.645 } 00:29:35.645 EOF 00:29:35.645 )") 00:29:35.645 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.908 { 00:29:35.908 "params": { 00:29:35.908 
"name": "Nvme$subsystem", 00:29:35.908 "trtype": "$TEST_TRANSPORT", 00:29:35.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "$NVMF_PORT", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.908 "hdgst": ${hdgst:-false}, 00:29:35.908 "ddgst": ${ddgst:-false} 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 } 00:29:35.908 EOF 00:29:35.908 )") 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.908 { 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme$subsystem", 00:29:35.908 "trtype": "$TEST_TRANSPORT", 00:29:35.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "$NVMF_PORT", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.908 "hdgst": ${hdgst:-false}, 00:29:35.908 "ddgst": ${ddgst:-false} 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 } 00:29:35.908 EOF 00:29:35.908 )") 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.908 { 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme$subsystem", 00:29:35.908 "trtype": "$TEST_TRANSPORT", 00:29:35.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "$NVMF_PORT", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.908 "hdgst": ${hdgst:-false}, 00:29:35.908 "ddgst": ${ddgst:-false} 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 } 00:29:35.908 EOF 00:29:35.908 )") 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.908 { 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme$subsystem", 00:29:35.908 "trtype": "$TEST_TRANSPORT", 00:29:35.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "$NVMF_PORT", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.908 "hdgst": ${hdgst:-false}, 00:29:35.908 "ddgst": ${ddgst:-false} 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 } 00:29:35.908 EOF 00:29:35.908 )") 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.908 { 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme$subsystem", 00:29:35.908 "trtype": "$TEST_TRANSPORT", 00:29:35.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "$NVMF_PORT", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.908 "hdgst": ${hdgst:-false}, 00:29:35.908 "ddgst": ${ddgst:-false} 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 } 00:29:35.908 EOF 00:29:35.908 )") 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.908 { 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme$subsystem", 00:29:35.908 "trtype": "$TEST_TRANSPORT", 00:29:35.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "$NVMF_PORT", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.908 "hdgst": ${hdgst:-false}, 00:29:35.908 "ddgst": ${ddgst:-false} 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 } 00:29:35.908 EOF 00:29:35.908 )") 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.908 { 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme$subsystem", 00:29:35.908 "trtype": "$TEST_TRANSPORT", 00:29:35.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "$NVMF_PORT", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.908 "hdgst": ${hdgst:-false}, 00:29:35.908 "ddgst": ${ddgst:-false} 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 } 00:29:35.908 EOF 00:29:35.908 )") 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.908 { 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme$subsystem", 00:29:35.908 "trtype": "$TEST_TRANSPORT", 00:29:35.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "$NVMF_PORT", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.908 "hdgst": ${hdgst:-false}, 00:29:35.908 "ddgst": ${ddgst:-false} 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 } 00:29:35.908 EOF 00:29:35.908 )") 00:29:35.908 05:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:35.908 [2024-12-09 05:22:49.700361] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:35.908 [2024-12-09 05:22:49.700468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687532 ] 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:35.908 05:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme1", 00:29:35.908 "trtype": "tcp", 00:29:35.908 "traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 },{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme2", 00:29:35.908 "trtype": "tcp", 00:29:35.908 "traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 },{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme3", 00:29:35.908 "trtype": "tcp", 00:29:35.908 "traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 },{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme4", 00:29:35.908 "trtype": "tcp", 00:29:35.908 "traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 },{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme5", 00:29:35.908 "trtype": "tcp", 00:29:35.908 "traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 },{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme6", 00:29:35.908 "trtype": "tcp", 00:29:35.908 "traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 },{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme7", 00:29:35.908 "trtype": "tcp", 00:29:35.908 
"traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 },{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme8", 00:29:35.908 "trtype": "tcp", 00:29:35.908 "traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 },{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme9", 00:29:35.908 "trtype": "tcp", 00:29:35.908 "traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 },{ 00:29:35.908 "params": { 00:29:35.908 "name": "Nvme10", 00:29:35.908 "trtype": "tcp", 00:29:35.908 "traddr": "10.0.0.2", 00:29:35.908 "adrfam": "ipv4", 00:29:35.908 "trsvcid": "4420", 00:29:35.908 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:35.908 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:35.908 "hdgst": false, 00:29:35.908 "ddgst": false 00:29:35.908 }, 00:29:35.908 "method": "bdev_nvme_attach_controller" 00:29:35.908 }' 00:29:35.908 [2024-12-09 05:22:49.844164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.169 [2024-12-09 05:22:49.942757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.552 Running I/O for 10 seconds... 
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:29:38.504 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1687215
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1687215 ']'
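The xtrace above is the harness's waitforio helper polling bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads (here it saw 131 on the first probe, so the loop broke immediately). A paraphrased sketch of that loop follows; the sleep between probes and the scripts/rpc.py spelling of rpc_cmd are assumptions, while the threshold (100) and retry count (10) come from the trace.

# Hedged sketch of the polling logic traced above, not the verbatim shutdown.sh helper.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        # Ask the bdevperf app (via its private RPC socket) how many reads have completed.
        count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then   # enough I/O has flowed; safe to start the shutdown
            ret=0
            break
        fi
        sleep 1                         # assumed pacing between probes
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1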
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1687215
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1687215
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1687215'
killing process with pid 1687215
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1687215
00:29:38.505 05:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1687215
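The kill/verify dance above (kill -0 to confirm the pid is alive, a ps comm= check so an unexpected process is never reaped, then kill and wait) is the harness's killprocess helper. A minimal sketch of that pattern, paraphrased from the trace rather than copied from autotest_common.sh:

# Hedged sketch of the traced killprocess pattern; the function body is a
# paraphrase, not the actual autotest_common.sh implementation.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1          # still running?
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 in this run
        [ "$name" != sudo ] || return 1             # never kill a bare sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                             # reap it; ignore a nonzero exit
}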
00:29:38.505 [2024-12-09 05:22:52.331829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set
[the recv-state *ERROR* line above repeats many times during target teardown, first for tqpair=0x618000007480 and then in further bursts for tqpair=0x618000009880, 0x618000007880, 0x618000008480, 0x618000008880, 0x618000008c80, and 0x618000009080; the duplicate lines are omitted here]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:38.509 [2024-12-09 05:22:52.346629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:38.509 [2024-12-09 05:22:52.346635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:38.509 [2024-12-09 05:22:52.346641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:38.509 [2024-12-09 05:22:52.346647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.346653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347578] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347712] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347852] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.347890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:38.510 [2024-12-09 05:22:52.349482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.510 [2024-12-09 05:22:52.349531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.510 [2024-12-09 05:22:52.349557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.510 [2024-12-09 05:22:52.349570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.510 [2024-12-09 05:22:52.349585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.510 [2024-12-09 05:22:52.349596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.510 [2024-12-09 05:22:52.349610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.510 [2024-12-09 05:22:52.349621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.510 [2024-12-09 05:22:52.349634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.510 [2024-12-09 05:22:52.349644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.510 [2024-12-09 05:22:52.349658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.511 [2024-12-09 05:22:52.349669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.511 [2024-12-09 05:22:52.349682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.511 [2024-12-09 05:22:52.349693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
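The condensed block above is the target side tearing down its TCP qpairs: nvmf_tcp_qpair_set_recv_state() in the target's TCP transport (the tcp.c in these messages) logs this line when asked to enter the receive state the qpair is already in, and since the flood coincides with shutdown, state(6) is presumably the terminal error state being requested once per outstanding event. A minimal sketch of that guard, with assumed names and state numbering taken from the log rather than from the SPDK headers:

    #include <stdio.h>

    enum pdu_recv_state {
        PDU_RECV_STATE_AWAIT_PDU_READY = 0,
        /* ... intermediate receive states elided ... */
        PDU_RECV_STATE_ERROR = 6,           /* the "state(6)" in the log */
    };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    static void
    qpair_set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* This branch produces the repeated log line: harmless in
             * isolation, noisy when teardown requests the same state
             * once per in-flight PDU. */
            fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair tqpair = { .recv_state = PDU_RECV_STATE_AWAIT_PDU_READY };

        qpair_set_recv_state(&tqpair, PDU_RECV_STATE_ERROR); /* transitions */
        qpair_set_recv_state(&tqpair, PDU_RECV_STATE_ERROR); /* logs the line */
        return 0;
    }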
00:29:38.510 [2024-12-09 05:22:52.349482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.510 [2024-12-09 05:22:52.349531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[command/completion pair repeated for WRITE cid:35-63 (lba 20864-24448, len:128 each) through 05:22:52.350249]
[command/completion pair repeated for READ cid:0-33 (lba 16384-20608, len:128 each) through 05:22:52.351079]
00:29:38.512 [2024-12-09 05:22:52.351128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
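The ERROR line just above is the host-side poller giving up on the I/O qpair: the -6 is a negated errno, and errno 6 on Linux is ENXIO, which strerror() renders as exactly the "No such device or address" text in the message. A quick standalone check, no SPDK dependency:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int rc = -ENXIO;   /* what a failed CQ poll reports: -6 on Linux */

        printf("CQ transport error %d (%s) on qpair id 1\n", rc, strerror(-rc));
        /* -> CQ transport error -6 (No such device or address) on qpair id 1 */
        return 0;
    }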
00:29:38.512 [2024-12-09 05:22:52.351411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:38.512 [2024-12-09 05:22:52.351431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[command/completion pair repeated for ASYNC EVENT REQUEST cid:1-3, then:]
00:29:38.512 [2024-12-09 05:22:52.351513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039b000 is same with the state(6) to be set
[the same four aborted ASYNC EVENT REQUESTs plus one recv-state error repeated for tqpair=0x61500039a100, 0x615000399200, 0x615000395600, 0x615000396500, 0x615000393800, 0x615000394700, 0x615000398300, 0x61500039bf00 and 0x615000397400, through 05:22:52.352652]
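Every completion print in this teardown carries the status pair "(00/08)": status code type 0x0 is Generic Command Status and status code 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base specification, matching the printed ABORTED - SQ DELETION text, and dnr:0 marks the completions as retryable. A small decoder sketch; the bit-field layout follows the spec's 16-bit status field, but the struct name is illustrative, not spdk_nvme_cpl:

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe completion status field (layout per the base spec, sketch only). */
    struct nvme_status {
        uint16_t p   : 1;   /* phase tag */
        uint16_t sc  : 8;   /* status code */
        uint16_t sct : 3;   /* status code type */
        uint16_t crd : 2;   /* command retry delay */
        uint16_t m   : 1;   /* more */
        uint16_t dnr : 1;   /* do not retry */
    };

    int main(void)
    {
        /* The "(00/08)" printed in the log: sct=0x0, sc=0x08, dnr:0. */
        struct nvme_status st = { .sct = 0x0, .sc = 0x08, .dnr = 0 };

        if (st.sct == 0x0 && st.sc == 0x08)
            printf("ABORTED - SQ DELETION (%02x/%02x) dnr:%u\n",
                   (unsigned)st.sct, (unsigned)st.sc, (unsigned)st.dnr);
        return 0;
    }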
00:29:38.513 [2024-12-09 05:22:52.352632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:38.513 [2024-12-09 05:22:52.352642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.352652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000397400 is same with the state(6) to be set
00:29:38.513 [2024-12-09 05:22:52.353236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.513 [2024-12-09 05:22:52.353263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.353284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.513 [2024-12-09 05:22:52.353296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.353310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.513 [2024-12-09 05:22:52.353321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.353334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.513 [2024-12-09 05:22:52.353348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.353361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.513 [2024-12-09 05:22:52.353372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.353385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.513 [2024-12-09 05:22:52.353402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.353415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.513 [2024-12-09 05:22:52.353425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.353439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.513 [2024-12-09 05:22:52.353449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.353462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.513 [2024-12-09 05:22:52.353473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.513 [2024-12-09 05:22:52.353486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.353959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.353972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.514 [2024-12-09 05:22:52.362530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.514 [2024-12-09 05:22:52.362543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.362980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.362991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.363006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.363016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.363029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039f380 is same with the state(6) to be set
00:29:38.515 [2024-12-09 05:22:52.364908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039b000 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.364946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039a100 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.364965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000399200 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.365004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000395600 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.365027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000396500 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.365046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.365069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000394700 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.365091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000398300 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.365114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039bf00 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.365132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000397400 (9): Bad file descriptor
00:29:38.515 [2024-12-09 05:22:52.365374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.515 [2024-12-09 05:22:52.365763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.515 [2024-12-09 05:22:52.365776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.365788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.365803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.365813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.365835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.365846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.365859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.365870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.365885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.365896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.365909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.365920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.365934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.365945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.365961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.365971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.365985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.365996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.516 [2024-12-09 05:22:52.366739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.516 [2024-12-09 05:22:52.366751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.366762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.366775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.366785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.366798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.366810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.366826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.366838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.366851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.366864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.366877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.366888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.366901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.366912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.366926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.366936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.366950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.366960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.368644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:38.517 [2024-12-09 05:22:52.370709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:38.517 [2024-12-09 05:22:52.370746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:38.517 [2024-12-09 05:22:52.371250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.517 [2024-12-09 05:22:52.371300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039bf00 with addr=10.0.0.2, port=4420
00:29:38.517 [2024-12-09 05:22:52.371317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039bf00 is same with the state(6) to be set
00:29:38.517 [2024-12-09 05:22:52.371963] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:38.517 [2024-12-09 05:22:52.372024] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:38.517 [2024-12-09 05:22:52.372068] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:38.517 [2024-12-09 05:22:52.372114] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:38.517 [2024-12-09 05:22:52.372157] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:38.517 [2024-12-09 05:22:52.372578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.517 [2024-12-09 05:22:52.372600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000396500 with addr=10.0.0.2, port=4420
00:29:38.517 [2024-12-09 05:22:52.372612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000396500 is same with the state(6) to be set
00:29:38.517 [2024-12-09 05:22:52.373065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.517 [2024-12-09 05:22:52.373110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000395600 with addr=10.0.0.2, port=4420
00:29:38.517 [2024-12-09 05:22:52.373125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000395600 is same with the state(6) to be set
00:29:38.517 [2024-12-09 05:22:52.373146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039bf00 (9): Bad file descriptor
00:29:38.517 [2024-12-09 05:22:52.373217] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:38.517 [2024-12-09 05:22:52.373267] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:38.517 [2024-12-09 05:22:52.373876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.373904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.373930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.373942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.373956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.373967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.373980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.373991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.517 [2024-12-09 05:22:52.374386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.517 [2024-12-09 05:22:52.374399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.374986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.374996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.375009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.375020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.375032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.375042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.375055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.375065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.375078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.375090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.375103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.375113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.375126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.375136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.375150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.375160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.375178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.375189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.518 [2024-12-09 05:22:52.375202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.518 [2024-12-09 05:22:52.375213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.519 [2024-12-09 05:22:52.375226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.519 [2024-12-09 05:22:52.375236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.519 [2024-12-09 05:22:52.375250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.519 [2024-12-09 05:22:52.375260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.519 [2024-12-09 05:22:52.375273] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.519 [2024-12-09 05:22:52.375284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.519 [2024-12-09 05:22:52.375297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.519 [2024-12-09 05:22:52.375307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.519 [2024-12-09 05:22:52.375320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.519 [2024-12-09 05:22:52.375331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.519 [2024-12-09 05:22:52.375344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.519 [2024-12-09 05:22:52.375355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.519 [2024-12-09 05:22:52.375368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.519 [2024-12-09 05:22:52.375378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.519 [2024-12-09 05:22:52.375395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.519 [2024-12-09 05:22:52.375405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.519 [2024-12-09 05:22:52.375416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0a00 is same with the state(6) to be set 00:29:38.519 [2024-12-09 05:22:52.375695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000396500 (9): Bad file descriptor 00:29:38.519 [2024-12-09 05:22:52.375715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000395600 (9): Bad file descriptor 00:29:38.519 [2024-12-09 05:22:52.375728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:38.519 [2024-12-09 05:22:52.375739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:38.519 [2024-12-09 05:22:52.375751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:38.519 [2024-12-09 05:22:52.375764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:38.519 [2024-12-09 05:22:52.375808] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:29:38.519 [2024-12-09 05:22:52.377395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:38.519 [2024-12-09 05:22:52.377435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:29:38.519 [2024-12-09 05:22:52.377446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:29:38.519 [2024-12-09 05:22:52.377457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:38.519 [2024-12-09 05:22:52.377467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:29:38.519 [2024-12-09 05:22:52.377478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:38.519 [2024-12-09 05:22:52.377488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:38.519 [2024-12-09 05:22:52.377498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:38.519 [2024-12-09 05:22:52.377508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:38.519 [2024-12-09 05:22:52.377545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.519 [2024-12-09 05:22:52.377564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.519 [... 59 further READ commands (cid:1-59, lba:16512-23936, len:128) printed and aborted identically: SQ DELETION (00/08) ...]
00:29:38.788 [... READ cid:60-63 (lba:24064-24448, len:128) aborted identically: SQ DELETION (00/08) ...]
00:29:38.788 [2024-12-09 05:22:52.600969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set
00:29:38.788 [2024-12-09 05:22:52.605551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.788 [2024-12-09 05:22:52.605618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.788 [... 63 further READ commands (cid:1-63, lba:16512-24448, len:128) printed and aborted identically: SQ DELETION (00/08) ...]
00:29:38.789 [2024-12-09 05:22:52.610301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e980 is same with the state(6) to be set
00:29:38.789 [2024-12-09 05:22:52.614770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.789 [2024-12-09 05:22:52.614844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.790 [... READ cid:5-30 (lba:17024-20224, len:128) printed and aborted identically: SQ DELETION (00/08) ...]
00:29:38.790 [2024-12-09 05:22:52.616313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.790 [2024-12-09 05:22:52.616337] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.616949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.616972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.617001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.617024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.617052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.617077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.617105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.790 [2024-12-09 05:22:52.617129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.790 [2024-12-09 05:22:52.617157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.617957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.617981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.618010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.618034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.618062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.618086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.618115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.618138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.618167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.618190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.618221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.618245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.618271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039f880 is same with the state(6) to be set 00:29:38.791 [2024-12-09 05:22:52.621479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.621517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.621550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.621575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.621604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.621629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.621657] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.621681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.621710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.621734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.621762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.621785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.621814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.621846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.621874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.621898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.621926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.621949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.621978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.622003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.622031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.622056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.622090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.622115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.622144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.622168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.622196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.622219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.622247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.622271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.622300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.622323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.622352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.622375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.622403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.791 [2024-12-09 05:22:52.622426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.791 [2024-12-09 05:22:52.622455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.622957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.622985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:38.792 [2024-12-09 05:22:52.623807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.623957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.623981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.624008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.624032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.624060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.624087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.624116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.624139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.792 [2024-12-09 05:22:52.624167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.792 [2024-12-09 05:22:52.624190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 
05:22:52.624344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.624880] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.624904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0000 is same with the state(6) to be set 00:29:38.793 [2024-12-09 05:22:52.627234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.627968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.627986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.628004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.628020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.793 [2024-12-09 05:22:52.628040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.793 [2024-12-09 05:22:52.628056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.794 [2024-12-09 05:22:52.628074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.794 [2024-12-09 05:22:52.628091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.794 [2024-12-09 05:22:52.628110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.794 [2024-12-09 05:22:52.628125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.794 [2024-12-09 05:22:52.628144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.794 [2024-12-09 05:22:52.628160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.794 [2024-12-09 05:22:52.628178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.794 [2024-12-09 05:22:52.628195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.794 [2024-12-09 05:22:52.628213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.794 [2024-12-09 05:22:52.628229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.794 [2024-12-09 05:22:52.628247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.794 [2024-12-09 05:22:52.628263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.794 [... the same READ + ABORTED - SQ DELETION record pair repeats for cid:30 through cid:63 (lba:20224 through lba:24448, len:128 each) ...]
00:29:38.795 [2024-12-09 05:22:52.629463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0500 is same with the state(6) to be set
00:29:38.795 [2024-12-09 05:22:52.631631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.795 [2024-12-09 05:22:52.631660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.796 [... the same record pair repeats for cid:1 through cid:63 (lba:16512 through lba:24448) ...]
00:29:38.796 [2024-12-09 05:22:52.633870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0f00 is same with the state(6) to be set
00:29:38.796 [2024-12-09 05:22:52.638786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect:
*NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:38.796 [2024-12-09 05:22:52.638826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:38.796 [2024-12-09 05:22:52.638843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:38.796 [2024-12-09 05:22:52.638859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:38.796 [2024-12-09 05:22:52.639297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.796 [2024-12-09 05:22:52.639347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039a100 with addr=10.0.0.2, port=4420
00:29:38.796 [2024-12-09 05:22:52.639366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039a100 is same with the state(6) to be set
00:29:38.796 [2024-12-09 05:22:52.639444] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:29:38.796 [2024-12-09 05:22:52.639466] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:29:38.796 [2024-12-09 05:22:52.639494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039a100 (9): Bad file descriptor
00:29:38.796 [2024-12-09 05:22:52.660170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:38.796 task offset: 20736 on job bdev=Nvme10n1 fails
00:29:38.796
00:29:38.796 Latency(us)
00:29:38.796 [2024-12-09T04:22:52.793Z] Device Information : runtime(s)    IOPS  MiB/s  Fail/s  TO/s    Average        min        max
00:29:38.796 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.796 Job: Nvme1n1 ended in about 1.13 seconds with error
00:29:38.796 Verification LBA range: start 0x0 length 0x400
00:29:38.796 Nvme1n1            : 1.13        113.34   7.08   56.67  0.00  372748.23   24248.32  442149.55
00:29:38.796 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.796 Job: Nvme2n1 ended in about 1.14 seconds with error
00:29:38.796 Verification LBA range: start 0x0 length 0x400
00:29:38.796 Nvme2n1            : 1.14        112.42   7.03   56.21  0.00  369074.63   17476.27  401954.13
00:29:38.796 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.796 Job: Nvme3n1 ended in about 0.90 seconds with error
00:29:38.796 Verification LBA range: start 0x0 length 0x400
00:29:38.796 Nvme3n1            : 0.90        214.06  13.38   71.35  0.00  211354.24   15837.87  262144.00
00:29:38.796 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.796 Job: Nvme4n1 ended in about 0.90 seconds with error
00:29:38.796 Verification LBA range: start 0x0 length 0x400
00:29:38.796 Nvme4n1            : 0.90        214.45  13.40   71.48  0.00  205946.24   17148.59  265639.25
00:29:38.797 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.797 Job: Nvme5n1 ended in about 1.15 seconds with error
00:29:38.797 Verification LBA range: start 0x0 length 0x400
00:29:38.797 Nvme5n1            : 1.15        115.16   7.20   55.83  0.00  344396.70   19442.35  414187.52
00:29:38.797 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.797 Job: Nvme6n1 ended in about 1.15 seconds with error
00:29:38.797 Verification LBA range: start 0x0 length 0x400
00:29:38.797 Nvme6n1            : 1.15        111.03   6.94   55.51  0.00  347025.35   20206.93  389720.75
00:29:38.797 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.797 Job: Nvme7n1 ended in about 1.16 seconds with error
00:29:38.797 Verification LBA range: start 0x0 length 0x400
00:29:38.797 Nvme7n1            : 1.16        110.61   6.91   55.31  0.00  341790.72   20643.84  510306.99
00:29:38.797 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.797 Job: Nvme8n1 ended in about 0.90 seconds with error
00:29:38.797 Verification LBA range: start 0x0 length 0x400
00:29:38.797 Nvme8n1            : 0.90        145.99   9.12   70.78  0.00  245710.96   15728.64  263891.63
00:29:38.797 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.797 Job: Nvme9n1 ended in about 1.16 seconds with error
00:29:38.797 Verification LBA range: start 0x0 length 0x400
00:29:38.797 Nvme9n1            : 1.16        110.20   6.89   55.10  0.00  330005.33   21080.75  361758.72
00:29:38.797 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.797 Job: Nvme10n1 ended in about 0.89 seconds with error
00:29:38.797 Verification LBA range: start 0x0 length 0x400
00:29:38.797 Nvme10n1           : 0.89        143.54   8.97   71.77  0.00  233486.79   14417.92  281367.89
00:29:38.797 [2024-12-09T04:22:52.794Z] ===================================================================================================================
00:29:38.797 [2024-12-09T04:22:52.794Z] Total              :            1390.79  86.92  620.02  0.00  294437.35   14417.92  510306.99
00:29:38.797 [2024-12-09 05:22:52.732617] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:38.797 [2024-12-09 05:22:52.732689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:38.797 1390.79 IOPS, 86.92 MiB/s [2024-12-09T04:22:52.794Z]
00:29:38.797 [2024-12-09 05:22:52.733071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.797 [2024-12-09 05:22:52.733101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:29:38.797 [2024-12-09 05:22:52.733117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:29:38.797 [2024-12-09 05:22:52.733459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.797 [2024-12-09 05:22:52.733475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:29:38.797 [2024-12-09 05:22:52.733491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000394700 is same with the state(6) to be set
00:29:38.797 [2024-12-09 05:22:52.733828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.797 [2024-12-09 05:22:52.733845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000397400 with addr=10.0.0.2, port=4420
00:29:38.797 [2024-12-09 05:22:52.733855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000397400 is same with the state(6) to be set
00:29:38.797 [2024-12-09 05:22:52.734220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.797 [2024-12-09 05:22:52.734236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000398300 with addr=10.0.0.2, port=4420
00:29:38.797 [2024-12-09 05:22:52.734247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000398300 is same with the state(6) to be set
00:29:38.797 [2024-12-09 05:22:52.734288] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:29:38.797 [2024-12-09 05:22:52.734308] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:29:38.797 [2024-12-09 05:22:52.734324] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:29:38.797 [2024-12-09 05:22:52.734342] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:29:38.797 [2024-12-09 05:22:52.734363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000398300 (9): Bad file descriptor
00:29:38.797 [2024-12-09 05:22:52.734385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000397400 (9): Bad file descriptor
00:29:38.797 [2024-12-09 05:22:52.734402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000394700 (9): Bad file descriptor
00:29:38.797 [2024-12-09 05:22:52.734420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:29:38.797 [2024-12-09 05:22:52.737059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:38.797 [2024-12-09 05:22:52.737093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:38.797 [2024-12-09 05:22:52.737112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:38.797 [2024-12-09 05:22:52.737485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.797 [2024-12-09 05:22:52.737506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000399200 with addr=10.0.0.2, port=4420
00:29:38.797 [2024-12-09 05:22:52.737519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000399200 is same with the state(6) to be set
00:29:38.797 [2024-12-09 05:22:52.737834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.797 [2024-12-09 05:22:52.737850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039b000 with addr=10.0.0.2, port=4420
00:29:38.797 [2024-12-09 05:22:52.737862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039b000 is same with the state(6) to be set
00:29:38.797 [2024-12-09 05:22:52.737881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:29:38.797 [2024-12-09 05:22:52.737892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:29:38.797 [2024-12-09 05:22:52.737904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:38.797 [2024-12-09 05:22:52.737918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:29:38.797 [2024-12-09 05:22:52.737958] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:29:38.797 [2024-12-09 05:22:52.737978] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:29:38.797 [2024-12-09 05:22:52.737992] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:29:38.797 [2024-12-09 05:22:52.738007] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:29:38.797 [2024-12-09 05:22:52.738922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.797 [2024-12-09 05:22:52.738948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039bf00 with addr=10.0.0.2, port=4420
00:29:38.797 [2024-12-09 05:22:52.738960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039bf00 is same with the state(6) to be set
00:29:38.797 [2024-12-09 05:22:52.739156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.797 [2024-12-09 05:22:52.739171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000395600 with addr=10.0.0.2, port=4420
00:29:38.797 [2024-12-09 05:22:52.739181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000395600 is same with the state(6) to be set
00:29:38.797 [2024-12-09 05:22:52.739522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.797 [2024-12-09 05:22:52.739537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000396500 with addr=10.0.0.2, port=4420
00:29:38.797 [2024-12-09 05:22:52.739547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000396500 is same with the state(6) to be set
00:29:38.797 [2024-12-09 05:22:52.739563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000399200 (9): Bad file descriptor
00:29:38.797 [2024-12-09 05:22:52.739578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039b000 (9): Bad file descriptor
[... records 05:22:52.739590-739739: the same four-record reset-failure sequence seen above for cnode8 (Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed) repeats for nqn.2016-06.io.spdk:cnode1, cnode2, cnode5, and cnode6 ...]
00:29:38.798 [2024-12-09 05:22:52.739846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:38.798 [2024-12-09 05:22:52.739878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039bf00 (9): Bad file descriptor
00:29:38.798 [2024-12-09 05:22:52.739893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000395600 (9): Bad file descriptor
00:29:38.798 [2024-12-09 05:22:52.739907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000396500 (9): Bad file descriptor
[... records 05:22:52.739920-739992: the same four-record reset-failure sequence repeats for cnode7 and cnode9 ...]
00:29:38.798 [2024-12-09 05:22:52.740407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.798 [2024-12-09 05:22:52.740427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039a100 with addr=10.0.0.2, port=4420
00:29:38.798 [2024-12-09 05:22:52.740439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039a100 is same with the state(6) to be set
[... records 05:22:52.740450-740559: the same four-record reset-failure sequence repeats for cnode10, cnode3, and cnode4 ...]
00:29:38.798 [2024-12-09 05:22:52.740599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039a100 (9): Bad file descriptor
[... records 05:22:52.740638-740667: the same four-record reset-failure sequence repeats once more for cnode8 ...]
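Every reconnect attempt in the dump above fails the same way: posix_sock_create() reports errno = 111, which on Linux is ECONNREFUSED. That is the expected signature for this test, since the target application is being shut down and nothing is listening on 10.0.0.2:4420 anymore. A quick, illustrative shell check of the errno mapping (not part of the test flow):

    # Illustrative only: show what errno 111 means on this Linux host.
    python3 -c 'import errno, os; print(errno.errorcode[111], "=", os.strerror(111))'
    # prints: ECONNREFUSED = Connection refused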
00:29:40.184 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1687532
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1687532
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1687532
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:41.128 rmmod nvme_tcp
00:29:41.128 rmmod nvme_fabrics
00:29:41.128 rmmod nvme_keyring
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1687215 ']'
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1687215
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1687215 ']'
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1687215
00:29:41.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1687215) - No such process
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1687215 is not found'
00:29:41.128 Process with pid 1687215 is not found
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:41.128 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:43.041 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:43.041
00:29:43.041 real    0m9.446s
00:29:43.041 user    0m25.152s
00:29:43.041 sys     0m1.543s
00:29:43.041 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:43.041 05:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:43.041 ************************
00:29:43.041 END TEST nvmf_shutdown_tc3
00:29:43.041 ************************
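In the trace just above, NOT wait 1687532 is the harness asserting that the backgrounded bdevperf process died: wait returns 255, any status above 128 is collapsed to 127, the case statement normalizes every failure to 1, and the final (( !es == 0 )) succeeds precisely because the command failed. A minimal sketch of that logic, reconstructed from the trace alone rather than copied from autotest_common.sh:

    # Sketch only: succeeds when the wrapped command fails, fails when it passes.
    NOT() {
        local es=0
        "$@" || es=$?            # run the command, capture its exit status
        (( es > 128 )) && es=127 # statuses above 128 collapse to 127 (255 -> 127 in the log)
        case "$es" in
            0) ;;                # command passed: es stays 0
            *) es=1 ;;           # any failure normalizes to es=1
        esac
        (( !es == 0 ))           # return success only for a non-zero es
    }

    # Usage mirroring the log: the backgrounded bdevperf is expected to die
    # during target shutdown, so waiting on its pid (hypothetical variable
    # name here) must fail for the test to pass.
    # NOT wait "$bdevperf_pid"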
00:29:43.041 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:29:43.041 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:29:43.041 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:29:43.041 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:43.041 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:43.041 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:43.302 ************************
00:29:43.302 START TEST nvmf_shutdown_tc4
00:29:43.302 ************************
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:29:43.302 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:29:43.303 Found 0000:31:00.0 (0x8086 - 0x159b)
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:29:43.303 Found 0000:31:00.1 (0x8086 - 0x159b)
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:43.303 05:22:57
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:43.303 Found net devices under 0000:31:00.0: cvl_0_0 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:43.303 Found net devices under 0000:31:00.1: cvl_0_1 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.303 05:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.303 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:29:43.564 00:29:43.564 --- 10.0.0.2 ping statistics --- 00:29:43.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.564 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:43.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:29:43.564 00:29:43.564 --- 10.0.0.1 ping statistics --- 00:29:43.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.564 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:43.564 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1689232 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1689232 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1689232 ']' 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
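For orientation, the nvmf_tcp_init sequence traced above boils down to a small two-port topology: one ice port (cvl_0_0) is moved into a private namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and a firewall rule opens the NVMe/TCP port. A sketch of the same sequence (interface, namespace, and address values are the ones from this rig's trace):

    # target port lives in its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the rule is tagged so the iptr teardown seen at the end of tc3 above
    # (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator

The two pings above (0.600 ms and 0.280 ms round trips) confirm both directions before the target application is started inside the namespace.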
00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.565 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:43.565 [2024-12-09 05:22:57.545032] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:43.565 [2024-12-09 05:22:57.545139] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.825 [2024-12-09 05:22:57.692563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:43.825 [2024-12-09 05:22:57.770033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.825 [2024-12-09 05:22:57.770068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.825 [2024-12-09 05:22:57.770077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.825 [2024-12-09 05:22:57.770085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.825 [2024-12-09 05:22:57.770092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.825 [2024-12-09 05:22:57.771893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.825 [2024-12-09 05:22:57.772181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.825 [2024-12-09 05:22:57.772275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.825 [2024-12-09 05:22:57.772296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.395 [2024-12-09 05:22:58.356622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:44.395 05:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:44.395 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.655 05:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.655 Malloc1 
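The per-subsystem blocks cat'd into rpcs.txt by the loop above are not expanded in the trace; based on the Malloc1 through Malloc10 bdevs and the cnodeN subsystems that show up later, each of the ten iterations plausibly amounts to the following rpc.py calls (a sketch: the bdev size and serial-number flags are assumptions, not taken from this log):

    # assumed shape of one rpcs.txt entry, i = 1..10 (flags illustrative)
    scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The transport itself was already created above with rpc_cmd nvmf_create_transport -t tcp -o -u 8192.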
00:29:44.655 [2024-12-09 05:22:58.509696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.655 Malloc2 00:29:44.655 Malloc3 00:29:44.915 Malloc4 00:29:44.915 Malloc5 00:29:44.915 Malloc6 00:29:44.915 Malloc7 00:29:45.174 Malloc8 00:29:45.174 Malloc9 00:29:45.174 Malloc10 00:29:45.174 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.174 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:45.174 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:45.174 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:45.174 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1689615 00:29:45.174 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:45.174 05:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:45.435 [2024-12-09 05:22:59.278292] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1689232 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1689232 ']' 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1689232 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1689232 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1689232' 00:29:50.721 killing process with pid 1689232 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1689232 00:29:50.721 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1689232 00:29:50.721 Write completed with error (sct=0, 
sc=8)
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines condensed ...]
00:29:50.722 [2024-12-09 05:23:04.244220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error lines condensed ...]
00:29:50.722 [2024-12-09 05:23:04.245903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines condensed ...]
00:29:50.722 [2024-12-09 05:23:04.247808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines condensed ...]
00:29:50.723 [2024-12-09 05:23:04.255166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:50.723 NVMe io qpair process completion error
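The write failures flooding the log from here on are the point of tc4: spdk_nvme_perf reports every in-flight command aborted as the target's queues are torn down (sct=0, sc=8 is NVMe generic status 08h, Command Aborted due to SQ Deletion, and the 'starting I/O failed: -6' entries correspond to the CQ transport errors logged above). Reconstructed from the trace between 05:22:59 and 05:23:04, the sequence that produces this is:

    # shutdown under load: start perf against the target, then kill the target mid-run
    build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!              # 1689615 in this run
    sleep 5                 # give perf time to connect its qpairs and queue I/O
    killprocess $nvmfpid    # autotest_common.sh helper: kill + wait on pid 1689232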
[... repeated write-error lines condensed ...]
00:29:50.723 [2024-12-09 05:23:04.256863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines condensed ...]
00:29:50.723 [2024-12-09 05:23:04.258274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines condensed ...]
00:29:50.724 [2024-12-09 05:23:04.260227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines condensed ...]
00:29:50.724 [2024-12-09 05:23:04.267689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.724 NVMe io qpair process completion error
[... repeated write-error lines condensed ...]
00:29:50.725 [2024-12-09 05:23:04.269342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines condensed ...]
00:29:50.725 [2024-12-09 05:23:04.270981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines condensed ...] 00:29:50.725
starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 [2024-12-09 05:23:04.272910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.725 Write completed with error (sct=0, sc=8) 00:29:50.725 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with 
error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error 
(sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 [2024-12-09 05:23:04.282566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.726 NVMe io qpair process completion error 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 starting I/O failed: -6 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 Write completed with error (sct=0, sc=8) 00:29:50.726 [2024-12-09 05:23:04.284408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write 
completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 [2024-12-09 05:23:04.285847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 
Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 [2024-12-09 05:23:04.287789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 
00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.727 starting I/O failed: -6 00:29:50.727 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 
00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 [2024-12-09 05:23:04.298928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.728 NVMe io qpair process completion error 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O 
failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 [2024-12-09 05:23:04.300514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 
Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.728 Write completed with error (sct=0, sc=8) 00:29:50.728 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 [2024-12-09 05:23:04.301899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 
00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 [2024-12-09 05:23:04.303822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O 
failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O 
failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.729 Write completed with error (sct=0, sc=8) 00:29:50.729 starting I/O failed: -6 00:29:50.730 [2024-12-09 05:23:04.313309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:50.730 NVMe io qpair process completion error 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 [2024-12-09 05:23:04.314843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 
00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 [2024-12-09 05:23:04.316459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 starting I/O failed: -6 00:29:50.730 Write completed with error (sct=0, sc=8) 00:29:50.730 
00:29:50.730 starting I/O failed: -6
00:29:50.730 Write completed with error (sct=0, sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued writes ...]
00:29:50.730 [2024-12-09 05:23:04.318299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:50.731 [2024-12-09 05:23:04.327833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.731 NVMe io qpair process completion error
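The "CQ transport error -6 (No such device or address)" entries are the host-side completion path reporting that the TCP connection behind each qpair went away while writes were still queued: -6 is ENXIO. In code built on the SPDK NVMe driver this surfaces as a negative return from spdk_nvme_qpair_process_completions(). A minimal polling sketch of that case follows; handle_qpair_drop() is a hypothetical application hook, not an SPDK API.

/* Sketch: poll an I/O qpair and react to a transport-level failure
 * like the "CQ transport error -6" entries in this log. Assumes the
 * qpair was connected and I/O submitted elsewhere. */
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
handle_qpair_drop(struct spdk_nvme_qpair *qpair)
{
	(void)qpair; /* e.g., destroy and re-create the qpair, or fail over */
}

static void
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0: drain everything that is ready */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Transport lost the connection/device ("No such device
		 * or address"); outstanding I/O completes with an error,
		 * as the surrounding log lines show. */
		handle_qpair_drop(qpair);
	} else if (rc < 0) {
		fprintf(stderr, "completion processing failed: %d\n", rc);
	}
}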
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:50.731 [2024-12-09 05:23:04.329416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:50.732 [2024-12-09 05:23:04.331023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:50.732 [2024-12-09 05:23:04.332934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:50.733 [2024-12-09 05:23:04.340258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.733 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:50.733 [2024-12-09 05:23:04.341808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
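The interleaved "starting I/O failed: -6" lines are the submission side of the same shutdown: once a qpair has failed, new submissions are rejected immediately with -ENXIO, while writes already in flight complete with status sct=0, sc=8 (generic status; likely "command aborted due to SQ deletion"). A hedged sketch of a submitter that distinguishes the two, assuming ns, qpair, and buf are set up elsewhere and the LBA is arbitrary:

/* Sketch: submit one write and report the two failure modes seen in
 * this log, the immediate -ENXIO reject and the errored completion. */
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* Matches the "Write completed with error (sct=0, sc=8)"
		 * entries: sct/sc are taken from the completion status. */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

static int
submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		 void *buf)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf,
					0 /* LBA */, 1 /* blocks */,
					write_done, NULL, 0 /* io_flags */);
	if (rc != 0) {
		/* -ENXIO here is the "starting I/O failed: -6" case. */
		printf("starting I/O failed: %d\n", rc);
	}
	return rc;
}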
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:50.734 [2024-12-09 05:23:04.343203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:50.734 [2024-12-09 05:23:04.345110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:50.735 [2024-12-09 05:23:04.354779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:50.735 NVMe io qpair process completion error
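Each subsystem (cnode2, cnode3, cnode4, ...) reports the same failure on qpair ids 1 through 4 because the initiator opens several I/O qpairs per fabrics controller, and each one observes the connection drop independently. A sketch of that kind of setup; NUM_QPAIRS and the error handling are illustrative, not taken from the test:

/* Sketch: allocate several I/O qpairs on one controller, which is why
 * the log reports "on qpair id 1" ... "on qpair id 4" per cnode. */
#include <stddef.h>
#include "spdk/nvme.h"

#define NUM_QPAIRS 4

static int
alloc_io_qpairs(struct spdk_nvme_ctrlr *ctrlr,
		struct spdk_nvme_qpair *qpairs[NUM_QPAIRS])
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));

	for (int i = 0; i < NUM_QPAIRS; i++) {
		qpairs[i] = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts,
							   sizeof(opts));
		if (qpairs[i] == NULL) {
			return -1;
		}
	}
	return 0;
}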
completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 [2024-12-09 05:23:04.356375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 
00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 [2024-12-09 05:23:04.357951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 Write completed with error (sct=0, sc=8) 00:29:50.735 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 
00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 [2024-12-09 05:23:04.359777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error 
(sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.736 starting I/O failed: -6 00:29:50.736 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 [2024-12-09 05:23:04.369237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.737 NVMe io 
qpair process completion error 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 [2024-12-09 05:23:04.370617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 starting I/O failed: -6 00:29:50.737 Write completed with error (sct=0, sc=8) 00:29:50.737 
00:29:50.737 Write completed with error (sct=0, sc=8)
00:29:50.737 starting I/O failed: -6
00:29:50.737 Write completed with error (sct=0, sc=8)
00:29:50.737 starting I/O failed: -6
00:29:50.737 [2024-12-09 05:23:04.372002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:50.737 Write completed with error (sct=0, sc=8)
00:29:50.737 starting I/O failed: -6
00:29:50.738 Write completed with error (sct=0, sc=8)
00:29:50.738 starting I/O failed: -6
00:29:50.738 [2024-12-09 05:23:04.373930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:50.738 Write completed with error (sct=0, sc=8)
00:29:50.738 starting I/O failed: -6
00:29:50.738 [2024-12-09 05:23:04.387547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.738 NVMe io qpair process completion error
00:29:50.738 Initializing NVMe Controllers
00:29:50.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:50.738 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:50.739 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:50.739 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:50.739 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:50.739 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:50.739 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:50.739 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:50.739 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:50.739 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:50.739 Controller IO queue size 128, less than required.
00:29:50.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:50.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:50.739 Initialization complete. Launching workers.
00:29:50.739 ========================================================
00:29:50.739 Latency(us)
00:29:50.739 Device Information : IOPS MiB/s Average min max
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1718.13 73.83 74519.59 1150.21 142334.38
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1714.17 73.66 74785.69 1427.84 158742.65
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1724.96 74.12 74419.00 833.43 157327.36
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1739.27 74.73 73938.18 941.00 155555.26
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1713.51 73.63 75198.55 1383.44 152794.38
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1641.51 70.53 78627.10 1150.55 180916.22
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1709.32 73.45 75638.90 1182.91 162418.21
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1700.30 73.06 76132.93 1133.45 171109.73
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1681.14 72.24 77131.93 1197.32 212790.82
00:29:50.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1723.86 74.07 75354.00 1442.45 193933.30
00:29:50.739 ========================================================
00:29:50.739 Total : 17066.16 733.31 75554.61 833.43 212790.82
00:29:50.739
00:29:50.739 [2024-12-09 05:23:04.409909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025b00 is same with the state(6) to be set
00:29:50.739 [2024-12-09 05:23:04.409972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028800 is same with the state(6) to be set
00:29:50.739 [2024-12-09 05:23:04.410013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000029700 is same with the state(6) to be set
00:29:50.739 [2024-12-09 05:23:04.410055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026280 is same with the state(6) to be set
00:29:50.739 [2024-12-09 05:23:04.410095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000029e80 is same with the state(6) to be set
00:29:50.739 [2024-12-09 05:23:04.410135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026a00 is same with the state(6) to be set
00:29:50.739 [2024-12-09 05:23:04.410176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set
00:29:50.739 [2024-12-09 05:23:04.410215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027900 is same with the state(6) to be set
00:29:50.739 [2024-12-09 05:23:04.410255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028080 is same with the state(6) to be set
00:29:50.739 [2024-12-09 05:23:04.410294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028f80 is same with the state(6) to be set
00:29:50.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:52.120 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
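[Editor's note] The "Controller IO queue size 128, less than required" advisories above mean spdk_nvme_perf was asked for more outstanding I/O than the target's 128-entry IO queues can hold, so the excess queues inside the NVMe driver. A minimal sketch of rerunning the workload with the depth capped to what the target reports, assuming the stock spdk_nvme_perf flags (-q queue depth, -o IO size in bytes, -w pattern, -t runtime seconds, -r transport ID) and taking the address and subsystem from this log:

  # keep outstanding I/O within the 128-entry queue the target advertises
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 128 -o 4096 -w randwrite -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'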
00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1689615 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1689615 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1689615 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:53.061 rmmod nvme_tcp 00:29:53.061 rmmod nvme_fabrics 00:29:53.061 rmmod nvme_keyring 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:53.061 05:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1689232 ']' 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1689232 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1689232 ']' 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1689232 00:29:53.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1689232) - No such process 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1689232 is not found' 00:29:53.061 Process with pid 1689232 is not found 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.061 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.968 05:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.968 00:29:54.968 real 0m11.813s 00:29:54.968 user 0m33.349s 00:29:54.968 sys 0m3.835s 00:29:54.968 05:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.968 05:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:54.968 ************************************ 00:29:54.968 END TEST nvmf_shutdown_tc4 00:29:54.968 ************************************ 00:29:54.968 05:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:54.968 00:29:54.968 real 0m51.987s 00:29:54.968 user 2m16.528s 00:29:54.968 sys 0m14.947s 00:29:54.968 05:23:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.968 05:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:54.968 ************************************ 00:29:54.968 END TEST nvmf_shutdown 00:29:54.968 ************************************ 00:29:55.229 05:23:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:55.229 05:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:55.229 05:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.229 05:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:55.229 ************************************ 00:29:55.229 START TEST nvmf_nsid 00:29:55.229 ************************************ 00:29:55.229 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:55.229 * Looking for test storage... 00:29:55.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:55.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.230 --rc genhtml_branch_coverage=1 00:29:55.230 --rc genhtml_function_coverage=1 00:29:55.230 --rc genhtml_legend=1 00:29:55.230 --rc geninfo_all_blocks=1 00:29:55.230 --rc geninfo_unexecuted_blocks=1 00:29:55.230 00:29:55.230 ' 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:55.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.230 --rc genhtml_branch_coverage=1 00:29:55.230 --rc genhtml_function_coverage=1 00:29:55.230 --rc genhtml_legend=1 00:29:55.230 --rc geninfo_all_blocks=1 00:29:55.230 --rc geninfo_unexecuted_blocks=1 00:29:55.230 00:29:55.230 ' 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:55.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.230 --rc genhtml_branch_coverage=1 00:29:55.230 --rc genhtml_function_coverage=1 00:29:55.230 --rc genhtml_legend=1 00:29:55.230 --rc geninfo_all_blocks=1 00:29:55.230 --rc geninfo_unexecuted_blocks=1 00:29:55.230 00:29:55.230 ' 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:55.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.230 --rc genhtml_branch_coverage=1 00:29:55.230 --rc genhtml_function_coverage=1 00:29:55.230 --rc genhtml_legend=1 00:29:55.230 --rc geninfo_all_blocks=1 00:29:55.230 --rc geninfo_unexecuted_blocks=1 00:29:55.230 00:29:55.230 ' 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:55.230 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.491 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.491 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:55.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.492 05:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:03.627 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:03.627 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
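[Editor's note] The discovery loop above walks the e810/x722/mlx PCI ID tables and matches both ports of an Intel E810 NIC (0x8086:0x159b). A sketch of reproducing the same lookup outside the harness, using standard lspci/sysfs paths with the ID pair and PCI address taken from this log:

  lspci -d 8086:159b                              # list the E810 ports the harness matched
  ls /sys/bus/pci/devices/0000:31:00.0/net/       # net device name(s) behind the first port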
00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.627 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:03.628 Found net devices under 0000:31:00.0: cvl_0_0 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:03.628 Found net devices under 0000:31:00.1: cvl_0_1 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.628 05:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:03.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:30:03.628 00:30:03.628 --- 10.0.0.2 ping statistics --- 00:30:03.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.628 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:03.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:30:03.628 00:30:03.628 --- 10.0.0.1 ping statistics --- 00:30:03.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.628 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1695140 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1695140 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1695140 ']' 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.628 05:23:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:03.628 [2024-12-09 05:23:16.903220] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:30:03.628 [2024-12-09 05:23:16.903358] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.628 [2024-12-09 05:23:17.067953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.628 [2024-12-09 05:23:17.192535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.628 [2024-12-09 05:23:17.192602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.628 [2024-12-09 05:23:17.192615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.628 [2024-12-09 05:23:17.192628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.628 [2024-12-09 05:23:17.192643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.628 [2024-12-09 05:23:17.194182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1695359 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
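[Editor's note] The nvmf_tgt started above runs inside the cvl_0_0_ns_spdk namespace that the earlier nvmf/common.sh trace assembled: target NIC cvl_0_0 at 10.0.0.2 inside the namespace, initiator NIC cvl_0_1 at 10.0.0.1 in the root namespace, and an iptables rule admitting NVMe/TCP on port 4420. A standalone sketch of that plumbing, with the interface names taken from this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability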
10.0.0.1 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=524363f1-f2d1-4770-a209-eda7c4a93b17 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b92651c5-13c7-4edc-ba34-7e98f78d6e08 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d27efff2-625b-4e9b-9442-56fd162cabc3 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:03.890 null0 00:30:03.890 null1 00:30:03.890 null2 00:30:03.890 [2024-12-09 05:23:17.779138] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.890 [2024-12-09 05:23:17.803515] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.890 [2024-12-09 05:23:17.826007] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:30:03.890 [2024-12-09 05:23:17.826116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1695359 ] 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1695359 /var/tmp/tgt2.sock 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1695359 ']' 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:03.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.890 05:23:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:04.151 [2024-12-09 05:23:17.979866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.151 [2024-12-09 05:23:18.105854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.092 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.092 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:05.092 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:05.352 [2024-12-09 05:23:19.171926] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.352 [2024-12-09 05:23:19.188201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:05.352 nvme0n1 nvme0n2 00:30:05.352 nvme1n1 00:30:05.352 05:23:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:05.352 05:23:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:05.352 05:23:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:06.759 05:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:08.140 05:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 524363f1-f2d1-4770-a209-eda7c4a93b17 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=524363f1f2d14770a209eda7c4a93b17 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 524363F1F2D14770A209EDA7C4A93B17 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 524363F1F2D14770A209EDA7C4A93B17 == \5\2\4\3\6\3\F\1\F\2\D\1\4\7\7\0\A\2\0\9\E\D\A\7\C\4\A\9\3\B\1\7 ]] 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b92651c5-13c7-4edc-ba34-7e98f78d6e08 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b92651c513c74edcba347e98f78d6e08 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B92651C513C74EDCBA347E98F78D6E08 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B92651C513C74EDCBA347E98F78D6E08 == \B\9\2\6\5\1\C\5\1\3\C\7\4\E\D\C\B\A\3\4\7\E\9\8\F\7\8\D\6\E\0\8 ]] 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:08.140 05:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d27efff2-625b-4e9b-9442-56fd162cabc3 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d27efff2625b4e9b944256fd162cabc3 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D27EFFF2625B4E9B944256FD162CABC3 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D27EFFF2625B4E9B944256FD162CABC3 == \D\2\7\E\F\F\F\2\6\2\5\B\4\E\9\B\9\4\4\2\5\6\F\D\1\6\2\C\A\B\C\3 ]] 00:30:08.140 05:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1695359 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1695359 ']' 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1695359 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1695359 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1695359' 00:30:08.402 killing process with pid 1695359 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1695359 00:30:08.402 05:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1695359 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:09.788 rmmod nvme_tcp 00:30:09.788 rmmod nvme_fabrics 00:30:09.788 rmmod nvme_keyring 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1695140 ']' 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1695140 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1695140 ']' 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1695140 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1695140 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1695140' 00:30:09.788 killing process with pid 1695140 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1695140 00:30:09.788 05:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1695140 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.359 05:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.285 05:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:12.285 00:30:12.285 real 0m17.201s 00:30:12.285 user 
0m15.236s 00:30:12.285 sys 0m7.181s 00:30:12.285 05:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.285 05:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:12.285 ************************************ 00:30:12.285 END TEST nvmf_nsid 00:30:12.285 ************************************ 00:30:12.285 05:23:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:12.285 00:30:12.285 real 19m14.207s 00:30:12.285 user 49m30.664s 00:30:12.285 sys 4m34.868s 00:30:12.285 05:23:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.285 05:23:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:12.285 ************************************ 00:30:12.285 END TEST nvmf_target_extra 00:30:12.285 ************************************ 00:30:12.546 05:23:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:12.546 05:23:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:12.546 05:23:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.546 05:23:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.546 ************************************ 00:30:12.546 START TEST nvmf_host 00:30:12.546 ************************************ 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:12.546 * Looking for test storage... 00:30:12.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:12.546 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:12.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.808 --rc genhtml_branch_coverage=1 00:30:12.808 --rc genhtml_function_coverage=1 00:30:12.808 --rc genhtml_legend=1 00:30:12.808 --rc geninfo_all_blocks=1 00:30:12.808 --rc geninfo_unexecuted_blocks=1 00:30:12.808 00:30:12.808 ' 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:12.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.808 --rc genhtml_branch_coverage=1 00:30:12.808 --rc genhtml_function_coverage=1 00:30:12.808 --rc genhtml_legend=1 00:30:12.808 --rc geninfo_all_blocks=1 00:30:12.808 --rc geninfo_unexecuted_blocks=1 00:30:12.808 00:30:12.808 ' 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:12.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.808 --rc genhtml_branch_coverage=1 00:30:12.808 --rc genhtml_function_coverage=1 00:30:12.808 --rc genhtml_legend=1 00:30:12.808 --rc geninfo_all_blocks=1 00:30:12.808 --rc geninfo_unexecuted_blocks=1 00:30:12.808 00:30:12.808 ' 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:12.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.808 --rc genhtml_branch_coverage=1 00:30:12.808 --rc genhtml_function_coverage=1 00:30:12.808 --rc genhtml_legend=1 00:30:12.808 --rc geninfo_all_blocks=1 00:30:12.808 --rc geninfo_unexecuted_blocks=1 00:30:12.808 00:30:12.808 ' 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.808 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:12.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.809 ************************************ 00:30:12.809 START TEST nvmf_multicontroller 00:30:12.809 ************************************ 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:12.809 * Looking for test storage... 
00:30:12.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:12.809 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:13.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.071 --rc genhtml_branch_coverage=1 00:30:13.071 --rc genhtml_function_coverage=1 00:30:13.071 --rc genhtml_legend=1 00:30:13.071 --rc geninfo_all_blocks=1 00:30:13.071 --rc geninfo_unexecuted_blocks=1 00:30:13.071 00:30:13.071 ' 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:13.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.071 --rc genhtml_branch_coverage=1 00:30:13.071 --rc genhtml_function_coverage=1 00:30:13.071 --rc genhtml_legend=1 00:30:13.071 --rc geninfo_all_blocks=1 00:30:13.071 --rc geninfo_unexecuted_blocks=1 00:30:13.071 00:30:13.071 ' 00:30:13.071 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:13.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.072 --rc genhtml_branch_coverage=1 00:30:13.072 --rc genhtml_function_coverage=1 00:30:13.072 --rc genhtml_legend=1 00:30:13.072 --rc geninfo_all_blocks=1 00:30:13.072 --rc geninfo_unexecuted_blocks=1 00:30:13.072 00:30:13.072 ' 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:13.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.072 --rc genhtml_branch_coverage=1 00:30:13.072 --rc genhtml_function_coverage=1 00:30:13.072 --rc genhtml_legend=1 00:30:13.072 --rc geninfo_all_blocks=1 00:30:13.072 --rc geninfo_unexecuted_blocks=1 00:30:13.072 00:30:13.072 ' 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:13.072 05:23:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:13.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:13.072 05:23:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.072 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.073 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.073 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:13.073 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:13.073 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.073 05:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.213 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.213 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:21.213 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:21.213 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:21.213 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:21.213 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:21.213 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:21.214 
05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:21.214 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:21.214 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.214 05:23:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:21.214 Found net devices under 0000:31:00.0: cvl_0_0 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:21.214 Found net devices under 0000:31:00.1: cvl_0_1 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:21.214 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:21.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:21.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:30:21.215 00:30:21.215 --- 10.0.0.2 ping statistics --- 00:30:21.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.215 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:21.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:21.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:30:21.215 00:30:21.215 --- 10.0.0.1 ping statistics --- 00:30:21.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.215 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1700835 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1700835 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1700835 ']' 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.215 05:23:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.215 [2024-12-09 05:23:34.594633] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:30:21.215 [2024-12-09 05:23:34.594764] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.215 [2024-12-09 05:23:34.762325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:21.215 [2024-12-09 05:23:34.893645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.215 [2024-12-09 05:23:34.893705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.215 [2024-12-09 05:23:34.893719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.215 [2024-12-09 05:23:34.893732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.215 [2024-12-09 05:23:34.893742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.215 [2024-12-09 05:23:34.896737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.215 [2024-12-09 05:23:34.896894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:21.215 [2024-12-09 05:23:34.896927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.475 [2024-12-09 05:23:35.416045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.475 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.736 Malloc0 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.736 [2024-12-09 05:23:35.535281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.736 [2024-12-09 05:23:35.547164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.736 Malloc1 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1701185 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1701185 /var/tmp/bdevperf.sock 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1701185 ']' 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:21.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
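At this point the target side is fully provisioned: a TCP transport, two 64 MiB malloc bdevs, and two subsystems (cnode1/cnode2) each exposing one namespace and listening on both 4420 and 4421, plus a bdevperf instance started with -z so it idles on /var/tmp/bdevperf.sock until driven over RPC. The same provisioning, condensed into plain rpc.py calls as a sketch of what the rpc_cmd wrapper issues above (-u 8192 sets the in-capsule data size; -o, used here with TCP, disables the C2H success optimization):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 1 2; do
    ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$((i - 1))"     # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                                      # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
    for port in 4420 4421; do
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s "$port"
    done
done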
00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.736 05:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.677 NVMe0n1 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.677 1 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.677 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.938 request: 00:30:22.938 { 00:30:22.938 "name": "NVMe0", 00:30:22.938 "trtype": "tcp", 00:30:22.938 "traddr": "10.0.0.2", 00:30:22.938 "adrfam": "ipv4", 00:30:22.938 "trsvcid": "4420", 00:30:22.938 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:22.938 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:22.938 "hostaddr": "10.0.0.1", 00:30:22.938 "prchk_reftag": false, 00:30:22.938 "prchk_guard": false, 00:30:22.938 "hdgst": false, 00:30:22.938 "ddgst": false, 00:30:22.938 "allow_unrecognized_csi": false, 00:30:22.938 "method": "bdev_nvme_attach_controller", 00:30:22.938 "req_id": 1 00:30:22.938 } 00:30:22.938 Got JSON-RPC error response 00:30:22.938 response: 00:30:22.938 { 00:30:22.938 "code": -114, 00:30:22.938 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:22.938 } 00:30:22.938 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.938 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:22.938 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.938 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.938 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.938 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:22.938 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:22.938 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:22.938 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.939 request: 00:30:22.939 { 00:30:22.939 "name": "NVMe0", 00:30:22.939 "trtype": "tcp", 00:30:22.939 "traddr": "10.0.0.2", 00:30:22.939 "adrfam": "ipv4", 00:30:22.939 "trsvcid": "4420", 00:30:22.939 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:22.939 "hostaddr": "10.0.0.1", 00:30:22.939 "prchk_reftag": false, 00:30:22.939 "prchk_guard": false, 00:30:22.939 "hdgst": false, 00:30:22.939 "ddgst": false, 00:30:22.939 "allow_unrecognized_csi": false, 00:30:22.939 "method": "bdev_nvme_attach_controller", 00:30:22.939 "req_id": 1 00:30:22.939 } 00:30:22.939 Got JSON-RPC error response 00:30:22.939 response: 00:30:22.939 { 00:30:22.939 "code": -114, 00:30:22.939 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:22.939 } 00:30:22.939 05:23:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.939 request: 00:30:22.939 { 00:30:22.939 "name": "NVMe0", 00:30:22.939 "trtype": "tcp", 00:30:22.939 "traddr": "10.0.0.2", 00:30:22.939 "adrfam": "ipv4", 00:30:22.939 "trsvcid": "4420", 00:30:22.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:22.939 "hostaddr": "10.0.0.1", 00:30:22.939 "prchk_reftag": false, 00:30:22.939 "prchk_guard": false, 00:30:22.939 "hdgst": false, 00:30:22.939 "ddgst": false, 00:30:22.939 "multipath": "disable", 00:30:22.939 "allow_unrecognized_csi": false, 00:30:22.939 "method": "bdev_nvme_attach_controller", 00:30:22.939 "req_id": 1 00:30:22.939 } 00:30:22.939 Got JSON-RPC error response 00:30:22.939 response: 00:30:22.939 { 00:30:22.939 "code": -114, 00:30:22.939 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:22.939 } 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.939 05:23:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.939 request: 00:30:22.939 { 00:30:22.939 "name": "NVMe0", 00:30:22.939 "trtype": "tcp", 00:30:22.939 "traddr": "10.0.0.2", 00:30:22.939 "adrfam": "ipv4", 00:30:22.939 "trsvcid": "4420", 00:30:22.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:22.939 "hostaddr": "10.0.0.1", 00:30:22.939 "prchk_reftag": false, 00:30:22.939 "prchk_guard": false, 00:30:22.939 "hdgst": false, 00:30:22.939 "ddgst": false, 00:30:22.939 "multipath": "failover", 00:30:22.939 "allow_unrecognized_csi": false, 00:30:22.939 "method": "bdev_nvme_attach_controller", 00:30:22.939 "req_id": 1 00:30:22.939 } 00:30:22.939 Got JSON-RPC error response 00:30:22.939 response: 00:30:22.939 { 00:30:22.939 "code": -114, 00:30:22.939 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:22.939 } 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.939 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.940 NVMe0n1 00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
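All four NOT-wrapped attach attempts above fail with JSON-RPC error -114 for the same underlying reason: a controller named NVMe0 already owns the 10.0.0.2:4420 path, and reusing that name with a different host NQN, a different subsystem (cnode2), multipath "disable", or multipath "failover" against the identical address is rejected. The plain attach that follows succeeds because it adds a genuinely distinct path: the same subsystem through the second listener port. Side by side, with the socket path and flags copied from the log (a sketch; the || branch just annotates the expected failure):

RPC='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
# Rejected (-114): name NVMe0 is taken and the path/host parameters conflict
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 \
    || echo "expected: controller NVMe0 already exists"
# Accepted: second path to the same subsystem through listener port 4421
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1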
00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.940 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.202 00:30:23.202 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.202 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:23.202 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:23.202 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.202 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.202 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.202 05:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:23.202 05:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:24.143 { 00:30:24.143 "results": [ 00:30:24.143 { 00:30:24.143 "job": "NVMe0n1", 00:30:24.143 "core_mask": "0x1", 00:30:24.143 "workload": "write", 00:30:24.143 "status": "finished", 00:30:24.143 "queue_depth": 128, 00:30:24.143 "io_size": 4096, 00:30:24.143 "runtime": 1.006515, 00:30:24.143 "iops": 24423.87843201542, 00:30:24.143 "mibps": 95.40577512506023, 00:30:24.143 "io_failed": 0, 00:30:24.143 "io_timeout": 0, 00:30:24.143 "avg_latency_us": 5227.32637459491, 00:30:24.143 "min_latency_us": 2757.9733333333334, 00:30:24.143 "max_latency_us": 13598.72 00:30:24.143 } 00:30:24.143 ], 00:30:24.143 "core_count": 1 00:30:24.143 } 00:30:24.143 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:24.143 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.143 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1701185 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1701185 ']' 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1701185 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1701185 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1701185' 00:30:24.404 killing process with pid 1701185 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1701185 00:30:24.404 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1701185 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:24.975 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:24.975 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:24.975 [2024-12-09 05:23:35.755616] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:30:24.975 [2024-12-09 05:23:35.755747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701185 ] 00:30:24.975 [2024-12-09 05:23:35.917602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.975 [2024-12-09 05:23:36.041656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.975 [2024-12-09 05:23:36.973151] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 8f020608-ede0-4126-87ec-0d8a9b98c86c already exists 00:30:24.975 [2024-12-09 05:23:36.973193] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:8f020608-ede0-4126-87ec-0d8a9b98c86c alias for bdev NVMe1n1 00:30:24.975 [2024-12-09 05:23:36.973209] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:24.975 Running I/O for 1 seconds... 00:30:24.975 24392.00 IOPS, 95.28 MiB/s 00:30:24.975 Latency(us) 00:30:24.975 [2024-12-09T04:23:38.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.975 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:24.975 NVMe0n1 : 1.01 24423.88 95.41 0.00 0.00 5227.33 2757.97 13598.72 00:30:24.975 [2024-12-09T04:23:38.972Z] =================================================================================================================== 00:30:24.975 [2024-12-09T04:23:38.972Z] Total : 24423.88 95.41 0.00 0.00 5227.33 2757.97 13598.72 00:30:24.975 Received shutdown signal, test time was about 1.000000 seconds 00:30:24.975 00:30:24.976 Latency(us) 00:30:24.976 [2024-12-09T04:23:38.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.976 [2024-12-09T04:23:38.973Z] =================================================================================================================== 00:30:24.976 [2024-12-09T04:23:38.973Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:24.976 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.976 rmmod nvme_tcp 00:30:24.976 rmmod nvme_fabrics 00:30:24.976 rmmod nvme_keyring 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:24.976 
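The try.txt dump above is bdevperf's own log, and its three *ERROR* lines appear benign here: NVMe0n1 already registered namespace UUID 8f020608-ede0-4126-87ec-0d8a9b98c86c, so when NVMe1 attaches to the same namespace, spdk_bdev_register() for NVMe1n1 cannot claim that UUID again and fails, while the test continues to drive NVMe0n1. The summary numbers are also internally consistent; a quick check of the throughput row:

# 24423.88 IOPS of 4096 B writes over the 1.0065 s run:
awk 'BEGIN { iops = 24423.87843201542; sz = 4096
             printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'   # prints 95.41, matching "mibps"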
05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1700835 ']' 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1700835 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1700835 ']' 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1700835 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.976 05:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1700835 00:30:25.235 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:25.235 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:25.235 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1700835' 00:30:25.235 killing process with pid 1700835 00:30:25.235 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1700835 00:30:25.235 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1700835 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.805 05:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.348 05:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.348 00:30:28.348 real 0m15.222s 00:30:28.348 user 0m19.919s 00:30:28.348 sys 0m6.820s 00:30:28.348 05:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.348 05:23:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 ************************************ 00:30:28.348 END TEST nvmf_multicontroller 00:30:28.348 ************************************ 00:30:28.348 05:23:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:30:28.348 05:23:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:28.348 05:23:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.348 05:23:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 ************************************ 00:30:28.348 START TEST nvmf_aer 00:30:28.348 ************************************ 00:30:28.348 05:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:28.348 * Looking for test storage... 00:30:28.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:28.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.348 --rc genhtml_branch_coverage=1 00:30:28.348 --rc genhtml_function_coverage=1 00:30:28.348 --rc genhtml_legend=1 00:30:28.348 --rc geninfo_all_blocks=1 00:30:28.348 --rc geninfo_unexecuted_blocks=1 00:30:28.348 00:30:28.348 ' 00:30:28.348 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:28.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.348 --rc genhtml_branch_coverage=1 00:30:28.348 --rc genhtml_function_coverage=1 00:30:28.348 --rc genhtml_legend=1 00:30:28.348 --rc geninfo_all_blocks=1 00:30:28.348 --rc geninfo_unexecuted_blocks=1 00:30:28.348 00:30:28.348 ' 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:28.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.349 --rc genhtml_branch_coverage=1 00:30:28.349 --rc genhtml_function_coverage=1 00:30:28.349 --rc genhtml_legend=1 00:30:28.349 --rc geninfo_all_blocks=1 00:30:28.349 --rc geninfo_unexecuted_blocks=1 00:30:28.349 00:30:28.349 ' 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:28.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.349 --rc genhtml_branch_coverage=1 00:30:28.349 --rc genhtml_function_coverage=1 00:30:28.349 --rc genhtml_legend=1 00:30:28.349 --rc geninfo_all_blocks=1 00:30:28.349 --rc geninfo_unexecuted_blocks=1 00:30:28.349 00:30:28.349 ' 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:28.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.349 05:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:36.496 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:36.496 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:36.496 Found net devices under 0000:31:00.0: cvl_0_0 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.496 05:23:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:36.496 Found net devices under 0000:31:00.1: cvl_0_1 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.496 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:36.496 
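nvmf_tcp_init above wires the two e810 ports into the test topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule admits NVMe/TCP traffic on 4420. Condensed from the ip/iptables records above:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

One wart worth noting from the common.sh sourcing further up: the "line 33: [: : integer expression expected" message is test(1) choking on an integer comparison against an empty value ('[' '' -eq 1 ']'). A defensive default avoids it (a sketch; FLAG is a placeholder, not the script's actual variable):

[ "${FLAG:-0}" -eq 1 ] && echo enabled    # empty/unset FLAG now compares as 0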
05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:36.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:30:36.496 00:30:36.496 --- 10.0.0.2 ping statistics --- 00:30:36.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.497 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:36.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:30:36.497 00:30:36.497 --- 10.0.0.1 ping statistics --- 00:30:36.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.497 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1706070 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1706070 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1706070 ']' 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.497 05:23:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.497 [2024-12-09 05:23:49.658874] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
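[Editor's note] nvmfappstart launches the target pinned to four cores (-m 0xF) inside the namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming SPDK's rpc.py is on hand (the real waitforlisten also verifies the pid is still alive while polling):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app is ready, with a bounded retry count.
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done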
00:30:36.497 [2024-12-09 05:23:49.658983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.497 [2024-12-09 05:23:49.826498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:36.497 [2024-12-09 05:23:49.955950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.497 [2024-12-09 05:23:49.956020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.497 [2024-12-09 05:23:49.956034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.497 [2024-12-09 05:23:49.956048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.497 [2024-12-09 05:23:49.956058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.497 [2024-12-09 05:23:49.958993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.497 [2024-12-09 05:23:49.959123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.497 [2024-12-09 05:23:49.959241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.497 [2024-12-09 05:23:49.959275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.497 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.497 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:36.497 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.497 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.497 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.758 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.758 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.759 [2024-12-09 05:23:50.500179] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.759 Malloc0 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.759 [2024-12-09 05:23:50.616247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.759 [ 00:30:36.759 { 00:30:36.759 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:36.759 "subtype": "Discovery", 00:30:36.759 "listen_addresses": [], 00:30:36.759 "allow_any_host": true, 00:30:36.759 "hosts": [] 00:30:36.759 }, 00:30:36.759 { 00:30:36.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:36.759 "subtype": "NVMe", 00:30:36.759 "listen_addresses": [ 00:30:36.759 { 00:30:36.759 "trtype": "TCP", 00:30:36.759 "adrfam": "IPv4", 00:30:36.759 "traddr": "10.0.0.2", 00:30:36.759 "trsvcid": "4420" 00:30:36.759 } 00:30:36.759 ], 00:30:36.759 "allow_any_host": true, 00:30:36.759 "hosts": [], 00:30:36.759 "serial_number": "SPDK00000000000001", 00:30:36.759 "model_number": "SPDK bdev Controller", 00:30:36.759 "max_namespaces": 2, 00:30:36.759 "min_cntlid": 1, 00:30:36.759 "max_cntlid": 65519, 00:30:36.759 "namespaces": [ 00:30:36.759 { 00:30:36.759 "nsid": 1, 00:30:36.759 "bdev_name": "Malloc0", 00:30:36.759 "name": "Malloc0", 00:30:36.759 "nguid": "6B745FAFCB7D4CFE9D887DB9775CD474", 00:30:36.759 "uuid": "6b745faf-cb7d-4cfe-9d88-7db9775cd474" 00:30:36.759 } 00:30:36.759 ] 00:30:36.759 } 00:30:36.759 ] 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1706264 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:36.759 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:37.020 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:37.020 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:37.020 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:37.020 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:37.020 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:37.020 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:30:37.020 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:30:37.020 05:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.280 Malloc1 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.280 [ 00:30:37.280 { 00:30:37.280 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:37.280 "subtype": "Discovery", 00:30:37.280 "listen_addresses": [], 00:30:37.280 "allow_any_host": true, 00:30:37.280 "hosts": [] 00:30:37.280 }, 00:30:37.280 { 00:30:37.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.280 "subtype": "NVMe", 00:30:37.280 "listen_addresses": [ 00:30:37.280 { 00:30:37.280 "trtype": "TCP", 00:30:37.280 "adrfam": "IPv4", 00:30:37.280 "traddr": "10.0.0.2", 00:30:37.280 "trsvcid": "4420" 00:30:37.280 } 00:30:37.280 ], 00:30:37.280 "allow_any_host": true, 00:30:37.280 "hosts": [], 00:30:37.280 "serial_number": "SPDK00000000000001", 00:30:37.280 "model_number": "SPDK bdev Controller", 00:30:37.280 "max_namespaces": 2, 00:30:37.280 "min_cntlid": 1, 00:30:37.280 "max_cntlid": 65519, 00:30:37.280 "namespaces": [ 00:30:37.280 { 00:30:37.280 "nsid": 1, 00:30:37.280 "bdev_name": "Malloc0", 00:30:37.280 "name": "Malloc0", 00:30:37.280 "nguid": "6B745FAFCB7D4CFE9D887DB9775CD474", 00:30:37.280 "uuid": "6b745faf-cb7d-4cfe-9d88-7db9775cd474" 00:30:37.280 }, 00:30:37.280 { 00:30:37.280 "nsid": 2, 00:30:37.280 "bdev_name": "Malloc1", 00:30:37.280 "name": "Malloc1", 00:30:37.280 "nguid": "3E49DF2A7FDD4BAD857A043F8EA475EF", 00:30:37.280 "uuid": "3e49df2a-7fdd-4bad-857a-043f8ea475ef" 00:30:37.280 } 00:30:37.280 ] 00:30:37.280 } 00:30:37.280 ] 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1706264 00:30:37.280 Asynchronous Event Request test 00:30:37.280 Attaching to 10.0.0.2 00:30:37.280 Attached to 10.0.0.2 00:30:37.280 Registering asynchronous event callbacks... 00:30:37.280 Starting namespace attribute notice tests for all controllers... 00:30:37.280 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:37.280 aer_cb - Changed Namespace 00:30:37.280 Cleaning up... 
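[Editor's note] The @1269-@1280 trace above is waitforfile in action: the aer tool registers its callbacks and touches /tmp/aer_touch_file, while the script polls for that file in 0.1 s steps (i climbed 0 through 4 here, against a 200-iteration ceiling, roughly 20 s). A paraphrase of the loop:

waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        [ "$i" -lt 200 ] || return 1    # give up after ~20 s
        i=$((i + 1))
        sleep 0.1
    done
}

Once the tool is waiting, adding Malloc1 as nsid 2 fires the Namespace Attribute Changed AEN, confirmed by the 'aer_cb - Changed Namespace' line before cleanup begins.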
00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.280 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.542 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.543 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:37.543 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.543 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.804 rmmod nvme_tcp 00:30:37.804 rmmod nvme_fabrics 00:30:37.804 rmmod nvme_keyring 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1706070 ']' 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1706070 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1706070 ']' 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1706070 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706070 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706070' 00:30:37.804 killing process with pid 1706070 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # 
kill 1706070 00:30:37.804 05:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1706070 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.745 05:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:41.293 00:30:41.293 real 0m12.786s 00:30:41.293 user 0m11.709s 00:30:41.293 sys 0m6.475s 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:41.293 ************************************ 00:30:41.293 END TEST nvmf_aer 00:30:41.293 ************************************ 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.293 ************************************ 00:30:41.293 START TEST nvmf_async_init 00:30:41.293 ************************************ 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:41.293 * Looking for test storage... 
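[Editor's note] Teardown mirrors setup: unload the host-side NVMe modules, kill the target by pid, strip only the firewall rules carrying the SPDK_NVMF tag, then dismantle the namespace. Roughly, per the iptr/_remove_spdk_ns trace above:

modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
# Remove only the tagged rules; every other iptables entry survives the restore.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1

The whole nvmf_aer pass costs about 12.8 s wall-clock before the harness moves on to nvmf_async_init.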
00:30:41.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:41.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.293 --rc genhtml_branch_coverage=1 00:30:41.293 --rc genhtml_function_coverage=1 00:30:41.293 --rc genhtml_legend=1 00:30:41.293 --rc geninfo_all_blocks=1 00:30:41.293 --rc geninfo_unexecuted_blocks=1 00:30:41.293 00:30:41.293 ' 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:41.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.293 --rc genhtml_branch_coverage=1 00:30:41.293 --rc genhtml_function_coverage=1 00:30:41.293 --rc genhtml_legend=1 00:30:41.293 --rc geninfo_all_blocks=1 00:30:41.293 --rc geninfo_unexecuted_blocks=1 00:30:41.293 00:30:41.293 ' 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:41.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.293 --rc genhtml_branch_coverage=1 00:30:41.293 --rc genhtml_function_coverage=1 00:30:41.293 --rc genhtml_legend=1 00:30:41.293 --rc geninfo_all_blocks=1 00:30:41.293 --rc geninfo_unexecuted_blocks=1 00:30:41.293 00:30:41.293 ' 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:41.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.293 --rc genhtml_branch_coverage=1 00:30:41.293 --rc genhtml_function_coverage=1 00:30:41.293 --rc genhtml_legend=1 00:30:41.293 --rc geninfo_all_blocks=1 00:30:41.293 --rc geninfo_unexecuted_blocks=1 00:30:41.293 00:30:41.293 ' 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.293 05:23:54 
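[Editor's note] The scripts/common.sh trace above is `lt 1.15 2` picking which lcov option set to export: cmp_versions splits each version string on '.', '-' and ':' into arrays and compares component-wise, treating missing components as 0. The core of that comparison, paraphrased:

IFS=.-: read -ra ver1 <<< "1.15"
IFS=.-: read -ra ver2 <<< "2"
for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo 'newer'; break; }
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo 'older'; break; }
done   # falls through with no output when the versions are equal

Here 1 < 2 decides it on the first component, so the branch-coverage LCOV_OPTS exported above are the pre-2.0 ones.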
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.293 05:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:41.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:41.293 05:23:55 
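[Editor's note] Note the captured shell error just above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable under test is unset, and test's -eq demands integers on both sides. It is benign here (the test simply fails and the branch is skipped), but the usual guard is a default expansion; a sketch with a hypothetical variable name, since the trace does not show which variable line 33 reads:

# ${VAR:-0} substitutes 0 when VAR is unset or empty, avoiding the
# "integer expression expected" error. SPDK_TEST_EXAMPLE_FLAG is illustrative.
if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi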
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f3c126d7abfa484ba5874cc65e38609e 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.293 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.294 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.294 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:41.294 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:41.294 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:41.294 05:23:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
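[Editor's note] async_init pre-computes its namespace GUID up front: an NGUID is the 32 hex digits of a UUID with the dashes stripped, which is why f3c126d7abfa484ba5874cc65e38609e resurfaces later, re-dashed, as the uuid/aliases value in bdev_get_bdevs. As traced:

nguid=$(uuidgen | tr -d -)    # e.g. f3c126d7abfa484ba5874cc65e38609e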
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:49.629 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:49.629 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.629 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:49.630 Found net devices under 0000:31:00.0: cvl_0_0 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:49.630 Found net devices under 0000:31:00.1: cvl_0_1 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.630 05:24:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:49.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:30:49.630 00:30:49.630 --- 10.0.0.2 ping statistics --- 00:30:49.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.630 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:30:49.630 00:30:49.630 --- 10.0.0.1 ping statistics --- 00:30:49.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.630 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1710993 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1710993 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1710993 ']' 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.630 05:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.630 [2024-12-09 05:24:02.791562] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
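[Editor's note] With the single-core (-m 0x1) target up, the configuration that follows condenses to six RPCs: a TCP transport, a 1024 MiB null bdev with 512-byte blocks (hence num_blocks 2097152 in the dumps below), an allow-any-host subsystem, the namespace pinned to the pre-computed NGUID, a listener, and a host-side attach. Paths shortened to rpc.py:

rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_null_create null0 1024 512           # 1024 MiB / 512 B = 2097152 blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0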
00:30:49.630 [2024-12-09 05:24:02.791695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.630 [2024-12-09 05:24:02.956136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.631 [2024-12-09 05:24:03.077791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.631 [2024-12-09 05:24:03.077871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.631 [2024-12-09 05:24:03.077884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.631 [2024-12-09 05:24:03.077897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.631 [2024-12-09 05:24:03.077910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.631 [2024-12-09 05:24:03.079402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.631 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.631 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:49.631 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.631 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.631 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.631 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.631 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:49.631 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.631 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.631 [2024-12-09 05:24:03.618126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.891 null0 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f3c126d7abfa484ba5874cc65e38609e 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.891 [2024-12-09 05:24:03.678515] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.891 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.151 nvme0n1 00:30:50.151 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.151 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:50.151 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.151 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.151 [ 00:30:50.151 { 00:30:50.151 "name": "nvme0n1", 00:30:50.151 "aliases": [ 00:30:50.151 "f3c126d7-abfa-484b-a587-4cc65e38609e" 00:30:50.151 ], 00:30:50.151 "product_name": "NVMe disk", 00:30:50.151 "block_size": 512, 00:30:50.151 "num_blocks": 2097152, 00:30:50.151 "uuid": "f3c126d7-abfa-484b-a587-4cc65e38609e", 00:30:50.151 "numa_id": 0, 00:30:50.151 "assigned_rate_limits": { 00:30:50.151 "rw_ios_per_sec": 0, 00:30:50.151 "rw_mbytes_per_sec": 0, 00:30:50.151 "r_mbytes_per_sec": 0, 00:30:50.151 "w_mbytes_per_sec": 0 00:30:50.151 }, 00:30:50.151 "claimed": false, 00:30:50.151 "zoned": false, 00:30:50.151 "supported_io_types": { 00:30:50.151 "read": true, 00:30:50.151 "write": true, 00:30:50.151 "unmap": false, 00:30:50.151 "flush": true, 00:30:50.151 "reset": true, 00:30:50.151 "nvme_admin": true, 00:30:50.151 "nvme_io": true, 00:30:50.151 "nvme_io_md": false, 00:30:50.151 "write_zeroes": true, 00:30:50.151 "zcopy": false, 00:30:50.151 "get_zone_info": false, 00:30:50.151 "zone_management": false, 00:30:50.151 "zone_append": false, 00:30:50.151 "compare": true, 00:30:50.151 "compare_and_write": true, 00:30:50.151 "abort": true, 00:30:50.151 "seek_hole": false, 00:30:50.151 "seek_data": false, 00:30:50.151 "copy": true, 00:30:50.151 "nvme_iov_md": false 00:30:50.151 }, 00:30:50.151 
"memory_domains": [ 00:30:50.151 { 00:30:50.151 "dma_device_id": "system", 00:30:50.151 "dma_device_type": 1 00:30:50.151 } 00:30:50.151 ], 00:30:50.151 "driver_specific": { 00:30:50.151 "nvme": [ 00:30:50.151 { 00:30:50.151 "trid": { 00:30:50.151 "trtype": "TCP", 00:30:50.151 "adrfam": "IPv4", 00:30:50.151 "traddr": "10.0.0.2", 00:30:50.151 "trsvcid": "4420", 00:30:50.151 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:50.151 }, 00:30:50.151 "ctrlr_data": { 00:30:50.151 "cntlid": 1, 00:30:50.151 "vendor_id": "0x8086", 00:30:50.151 "model_number": "SPDK bdev Controller", 00:30:50.151 "serial_number": "00000000000000000000", 00:30:50.151 "firmware_revision": "25.01", 00:30:50.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.151 "oacs": { 00:30:50.151 "security": 0, 00:30:50.151 "format": 0, 00:30:50.151 "firmware": 0, 00:30:50.151 "ns_manage": 0 00:30:50.151 }, 00:30:50.151 "multi_ctrlr": true, 00:30:50.151 "ana_reporting": false 00:30:50.151 }, 00:30:50.151 "vs": { 00:30:50.151 "nvme_version": "1.3" 00:30:50.151 }, 00:30:50.151 "ns_data": { 00:30:50.151 "id": 1, 00:30:50.151 "can_share": true 00:30:50.152 } 00:30:50.152 } 00:30:50.152 ], 00:30:50.152 "mp_policy": "active_passive" 00:30:50.152 } 00:30:50.152 } 00:30:50.152 ] 00:30:50.152 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.152 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:50.152 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.152 05:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.152 [2024-12-09 05:24:03.956655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:50.152 [2024-12-09 05:24:03.956790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:30:50.152 [2024-12-09 05:24:04.089006] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.152 [ 00:30:50.152 { 00:30:50.152 "name": "nvme0n1", 00:30:50.152 "aliases": [ 00:30:50.152 "f3c126d7-abfa-484b-a587-4cc65e38609e" 00:30:50.152 ], 00:30:50.152 "product_name": "NVMe disk", 00:30:50.152 "block_size": 512, 00:30:50.152 "num_blocks": 2097152, 00:30:50.152 "uuid": "f3c126d7-abfa-484b-a587-4cc65e38609e", 00:30:50.152 "numa_id": 0, 00:30:50.152 "assigned_rate_limits": { 00:30:50.152 "rw_ios_per_sec": 0, 00:30:50.152 "rw_mbytes_per_sec": 0, 00:30:50.152 "r_mbytes_per_sec": 0, 00:30:50.152 "w_mbytes_per_sec": 0 00:30:50.152 }, 00:30:50.152 "claimed": false, 00:30:50.152 "zoned": false, 00:30:50.152 "supported_io_types": { 00:30:50.152 "read": true, 00:30:50.152 "write": true, 00:30:50.152 "unmap": false, 00:30:50.152 "flush": true, 00:30:50.152 "reset": true, 00:30:50.152 "nvme_admin": true, 00:30:50.152 "nvme_io": true, 00:30:50.152 "nvme_io_md": false, 00:30:50.152 "write_zeroes": true, 00:30:50.152 "zcopy": false, 00:30:50.152 "get_zone_info": false, 00:30:50.152 "zone_management": false, 00:30:50.152 "zone_append": false, 00:30:50.152 "compare": true, 00:30:50.152 "compare_and_write": true, 00:30:50.152 "abort": true, 00:30:50.152 "seek_hole": false, 00:30:50.152 "seek_data": false, 00:30:50.152 "copy": true, 00:30:50.152 "nvme_iov_md": false 00:30:50.152 }, 00:30:50.152 "memory_domains": [ 00:30:50.152 { 00:30:50.152 "dma_device_id": "system", 00:30:50.152 "dma_device_type": 1 00:30:50.152 } 00:30:50.152 ], 00:30:50.152 "driver_specific": { 00:30:50.152 "nvme": [ 00:30:50.152 { 00:30:50.152 "trid": { 00:30:50.152 "trtype": "TCP", 00:30:50.152 "adrfam": "IPv4", 00:30:50.152 "traddr": "10.0.0.2", 00:30:50.152 "trsvcid": "4420", 00:30:50.152 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:50.152 }, 00:30:50.152 "ctrlr_data": { 00:30:50.152 "cntlid": 2, 00:30:50.152 "vendor_id": "0x8086", 00:30:50.152 "model_number": "SPDK bdev Controller", 00:30:50.152 "serial_number": "00000000000000000000", 00:30:50.152 "firmware_revision": "25.01", 00:30:50.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.152 "oacs": { 00:30:50.152 "security": 0, 00:30:50.152 "format": 0, 00:30:50.152 "firmware": 0, 00:30:50.152 "ns_manage": 0 00:30:50.152 }, 00:30:50.152 "multi_ctrlr": true, 00:30:50.152 "ana_reporting": false 00:30:50.152 }, 00:30:50.152 "vs": { 00:30:50.152 "nvme_version": "1.3" 00:30:50.152 }, 00:30:50.152 "ns_data": { 00:30:50.152 "id": 1, 00:30:50.152 "can_share": true 00:30:50.152 } 00:30:50.152 } 00:30:50.152 ], 00:30:50.152 "mp_policy": "active_passive" 00:30:50.152 } 00:30:50.152 } 00:30:50.152 ] 00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
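[Editor's note] Comparing this dump with the one before the reset, the only material change is ctrlr_data.cntlid ticking from 1 to 2: namespace, uuid, and trid all survive the reconnect, and the host simply lands on a fresh controller. A one-liner to watch that field, assuming jq is installed (an assumption; it is not used by the test itself):

  rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after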
00:30:50.152 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4VYDo1qn9Q 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4VYDo1qn9Q 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.4VYDo1qn9Q 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.414 [2024-12-09 05:24:04.181449] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:50.414 [2024-12-09 05:24:04.181690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.414 [2024-12-09 05:24:04.205516] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:50.414 nvme0n1 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.414 [ 00:30:50.414 { 00:30:50.414 "name": "nvme0n1", 00:30:50.414 "aliases": [ 00:30:50.414 "f3c126d7-abfa-484b-a587-4cc65e38609e" 00:30:50.414 ], 00:30:50.414 "product_name": "NVMe disk", 00:30:50.414 "block_size": 512, 00:30:50.414 "num_blocks": 2097152, 00:30:50.414 "uuid": "f3c126d7-abfa-484b-a587-4cc65e38609e", 00:30:50.414 "numa_id": 0, 00:30:50.414 "assigned_rate_limits": { 00:30:50.414 "rw_ios_per_sec": 0, 00:30:50.414 "rw_mbytes_per_sec": 0, 00:30:50.414 "r_mbytes_per_sec": 0, 00:30:50.414 "w_mbytes_per_sec": 0 00:30:50.414 }, 00:30:50.414 "claimed": false, 00:30:50.414 "zoned": false, 00:30:50.414 "supported_io_types": { 00:30:50.414 "read": true, 00:30:50.414 "write": true, 00:30:50.414 "unmap": false, 00:30:50.414 "flush": true, 00:30:50.414 "reset": true, 00:30:50.414 "nvme_admin": true, 00:30:50.414 "nvme_io": true, 00:30:50.414 "nvme_io_md": false, 00:30:50.414 "write_zeroes": true, 00:30:50.414 "zcopy": false, 00:30:50.414 "get_zone_info": false, 00:30:50.414 "zone_management": false, 00:30:50.414 "zone_append": false, 00:30:50.414 "compare": true, 00:30:50.414 "compare_and_write": true, 00:30:50.414 "abort": true, 00:30:50.414 "seek_hole": false, 00:30:50.414 "seek_data": false, 00:30:50.414 "copy": true, 00:30:50.414 "nvme_iov_md": false 00:30:50.414 }, 00:30:50.414 "memory_domains": [ 00:30:50.414 { 00:30:50.414 "dma_device_id": "system", 00:30:50.414 "dma_device_type": 1 00:30:50.414 } 00:30:50.414 ], 00:30:50.414 "driver_specific": { 00:30:50.414 "nvme": [ 00:30:50.414 { 00:30:50.414 "trid": { 00:30:50.414 "trtype": "TCP", 00:30:50.414 "adrfam": "IPv4", 00:30:50.414 "traddr": "10.0.0.2", 00:30:50.414 "trsvcid": "4421", 00:30:50.414 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:50.414 }, 00:30:50.414 "ctrlr_data": { 00:30:50.414 "cntlid": 3, 00:30:50.414 "vendor_id": "0x8086", 00:30:50.414 "model_number": "SPDK bdev Controller", 00:30:50.414 "serial_number": "00000000000000000000", 00:30:50.414 "firmware_revision": "25.01", 00:30:50.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.414 "oacs": { 00:30:50.414 "security": 0, 00:30:50.414 "format": 0, 00:30:50.414 "firmware": 0, 00:30:50.414 "ns_manage": 0 00:30:50.414 }, 00:30:50.414 "multi_ctrlr": true, 00:30:50.414 "ana_reporting": false 00:30:50.414 }, 00:30:50.414 "vs": { 00:30:50.414 "nvme_version": "1.3" 00:30:50.414 }, 00:30:50.414 "ns_data": { 00:30:50.414 "id": 1, 00:30:50.414 "can_share": true 00:30:50.414 } 00:30:50.414 } 00:30:50.414 ], 00:30:50.414 "mp_policy": "active_passive" 00:30:50.414 } 00:30:50.414 } 00:30:50.414 ] 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.4VYDo1qn9Q 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
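[Editor's note] The TLS leg above flips the subsystem from open to PSK-gated: allow_any_host is disabled, a --secure-channel listener is added on 4421, and the same keyring entry (key0) is referenced on both the target side (nvmf_subsystem_add_host --psk) and the host side (bdev_nvme_attach_controller --psk). Condensed into a sketch, with the key material copied verbatim from the trace and the temp-file name standing in for whatever mktemp returned:

  KEY=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
  chmod 0600 "$KEY"
  rpc.py keyring_file_add_key key0 "$KEY"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

The resulting dump shows the same bdev re-materializing over the secured 4421 listener with cntlid 3; both tcp.c and bdev_nvme_rpc.c still flag TLS support as experimental, hence the two NOTICE lines.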
00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:50.414 rmmod nvme_tcp 00:30:50.414 rmmod nvme_fabrics 00:30:50.414 rmmod nvme_keyring 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1710993 ']' 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1710993 00:30:50.414 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1710993 ']' 00:30:50.674 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1710993 00:30:50.674 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:50.674 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.674 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1710993 00:30:50.674 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.674 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.674 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1710993' 00:30:50.674 killing process with pid 1710993 00:30:50.674 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1710993 00:30:50.674 05:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1710993 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
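[Editor's note] The teardown above is nvmftestfini: flush, unload the kernel initiator modules, then kill the reactor by pid. Reduced to its effective commands (a sketch; killprocess also guards against killing sudo, elided here):

  sync
  modprobe -v -r nvme-tcp        # pulls out nvme_tcp, nvme_fabrics, nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill 1710993 && wait 1710993   # pid of reactor_0, i.e. the nvmf_tgt app under test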
00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.626 05:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:54.164 00:30:54.164 real 0m12.830s 00:30:54.164 user 0m5.031s 00:30:54.164 sys 0m6.354s 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.164 ************************************ 00:30:54.164 END TEST nvmf_async_init 00:30:54.164 ************************************ 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.164 ************************************ 00:30:54.164 START TEST dma 00:30:54.164 ************************************ 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:54.164 * Looking for test storage... 00:30:54.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.164 --rc genhtml_branch_coverage=1 00:30:54.164 --rc genhtml_function_coverage=1 00:30:54.164 --rc genhtml_legend=1 00:30:54.164 --rc geninfo_all_blocks=1 00:30:54.164 --rc geninfo_unexecuted_blocks=1 00:30:54.164 00:30:54.164 ' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.164 --rc genhtml_branch_coverage=1 00:30:54.164 --rc genhtml_function_coverage=1 00:30:54.164 --rc genhtml_legend=1 00:30:54.164 --rc geninfo_all_blocks=1 00:30:54.164 --rc geninfo_unexecuted_blocks=1 00:30:54.164 00:30:54.164 ' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.164 --rc genhtml_branch_coverage=1 00:30:54.164 --rc genhtml_function_coverage=1 00:30:54.164 --rc genhtml_legend=1 00:30:54.164 --rc geninfo_all_blocks=1 00:30:54.164 --rc geninfo_unexecuted_blocks=1 00:30:54.164 00:30:54.164 ' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.164 --rc genhtml_branch_coverage=1 00:30:54.164 --rc genhtml_function_coverage=1 00:30:54.164 --rc genhtml_legend=1 00:30:54.164 --rc geninfo_all_blocks=1 00:30:54.164 --rc geninfo_unexecuted_blocks=1 00:30:54.164 00:30:54.164 ' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.164 
05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:54.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:54.164 00:30:54.164 real 0m0.238s 00:30:54.164 user 0m0.135s 00:30:54.164 sys 0m0.118s 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:54.164 ************************************ 00:30:54.164 END TEST dma 00:30:54.164 ************************************ 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.164 05:24:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.164 ************************************ 00:30:54.164 START TEST nvmf_identify 00:30:54.164 
************************************ 00:30:54.164 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:54.164 * Looking for test storage... 00:30:54.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:54.164 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:54.165 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:30:54.165 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:54.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.426 --rc genhtml_branch_coverage=1 00:30:54.426 --rc genhtml_function_coverage=1 00:30:54.426 --rc genhtml_legend=1 00:30:54.426 --rc geninfo_all_blocks=1 00:30:54.426 --rc geninfo_unexecuted_blocks=1 00:30:54.426 00:30:54.426 ' 00:30:54.426 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:54.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.427 --rc genhtml_branch_coverage=1 00:30:54.427 --rc genhtml_function_coverage=1 00:30:54.427 --rc genhtml_legend=1 00:30:54.427 --rc geninfo_all_blocks=1 00:30:54.427 --rc geninfo_unexecuted_blocks=1 00:30:54.427 00:30:54.427 ' 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:54.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.427 --rc genhtml_branch_coverage=1 00:30:54.427 --rc genhtml_function_coverage=1 00:30:54.427 --rc genhtml_legend=1 00:30:54.427 --rc geninfo_all_blocks=1 00:30:54.427 --rc geninfo_unexecuted_blocks=1 00:30:54.427 00:30:54.427 ' 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:54.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.427 --rc genhtml_branch_coverage=1 00:30:54.427 --rc genhtml_function_coverage=1 00:30:54.427 --rc genhtml_legend=1 00:30:54.427 --rc geninfo_all_blocks=1 00:30:54.427 --rc geninfo_unexecuted_blocks=1 00:30:54.427 00:30:54.427 ' 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:54.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.427 05:24:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:02.564 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:02.564 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:02.564 Found net devices under 0000:31:00.0: cvl_0_0 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:02.564 Found net devices under 0000:31:00.1: cvl_0_1 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:02.564 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:02.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:31:02.565 00:31:02.565 --- 10.0.0.2 ping statistics --- 00:31:02.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.565 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:02.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:31:02.565 00:31:02.565 --- 10.0.0.1 ping statistics --- 00:31:02.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.565 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1716306 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1716306 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1716306 ']' 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.565 05:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:02.565 [2024-12-09 05:24:16.058102] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:31:02.565 [2024-12-09 05:24:16.058225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:02.565 [2024-12-09 05:24:16.192426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:02.565 [2024-12-09 05:24:16.297437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:02.565 [2024-12-09 05:24:16.297502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:02.565 [2024-12-09 05:24:16.297512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:02.565 [2024-12-09 05:24:16.297522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:02.565 [2024-12-09 05:24:16.297530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:02.565 [2024-12-09 05:24:16.299997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:02.565 [2024-12-09 05:24:16.300133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:02.565 [2024-12-09 05:24:16.300234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:02.565 [2024-12-09 05:24:16.300263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:31:03.136 [2024-12-09 05:24:16.855545] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:31:03.136 Malloc0
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:03.136 05:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:03.136 [2024-12-09 05:24:17.024304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:03.136 [ 00:31:03.136 { 00:31:03.136 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:03.136 "subtype": "Discovery", 00:31:03.136 "listen_addresses": [ 00:31:03.136 { 00:31:03.136 "trtype": "TCP", 00:31:03.136 "adrfam": "IPv4", 00:31:03.136 "traddr": "10.0.0.2", 00:31:03.136 "trsvcid": "4420" 00:31:03.136 } 00:31:03.136 ], 00:31:03.136 "allow_any_host": true, 00:31:03.136 "hosts": [] 00:31:03.136 }, 00:31:03.136 { 00:31:03.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.136 "subtype": "NVMe", 00:31:03.136 "listen_addresses": [ 00:31:03.136 { 00:31:03.136 "trtype": "TCP", 00:31:03.136 "adrfam": "IPv4", 00:31:03.136 "traddr": "10.0.0.2", 00:31:03.136 "trsvcid": "4420" 00:31:03.136 } 00:31:03.136 ], 00:31:03.136 "allow_any_host": true, 00:31:03.136 "hosts": [], 00:31:03.136 "serial_number": "SPDK00000000000001", 00:31:03.136 "model_number": "SPDK bdev Controller", 00:31:03.136 "max_namespaces": 32, 00:31:03.136 "min_cntlid": 1, 00:31:03.136 "max_cntlid": 65519, 00:31:03.136 "namespaces": [ 00:31:03.136 { 00:31:03.136 "nsid": 1, 00:31:03.136 "bdev_name": "Malloc0", 00:31:03.136 "name": "Malloc0", 00:31:03.136 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:03.136 "eui64": "ABCDEF0123456789", 00:31:03.136 "uuid": "dc374080-fd92-4ca4-b253-335727c4dc35" 00:31:03.136 } 00:31:03.136 ] 00:31:03.136 } 00:31:03.136 ] 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.136 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:03.136 [2024-12-09 05:24:17.111495] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:31:03.136 [2024-12-09 05:24:17.111583] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716470 ] 00:31:03.402 [2024-12-09 05:24:17.191303] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:03.402 [2024-12-09 05:24:17.191436] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:03.402 [2024-12-09 05:24:17.191453] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:03.402 [2024-12-09 05:24:17.191479] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:03.402 [2024-12-09 05:24:17.191506] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:03.402 [2024-12-09 05:24:17.195421] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:03.402 [2024-12-09 05:24:17.195502] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025600 0 00:31:03.402 [2024-12-09 05:24:17.202857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:03.402 [2024-12-09 05:24:17.202893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:03.402 [2024-12-09 05:24:17.202903] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:03.402 [2024-12-09 05:24:17.202910] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:03.402 [2024-12-09 05:24:17.202988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.203000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.203016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.402 [2024-12-09 05:24:17.203049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:03.402 [2024-12-09 05:24:17.203091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.402 [2024-12-09 05:24:17.210843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.402 [2024-12-09 05:24:17.210877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.402 [2024-12-09 05:24:17.210884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.210895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.402 [2024-12-09 05:24:17.210927] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:03.402 [2024-12-09 05:24:17.210948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:03.402 [2024-12-09 05:24:17.210960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:03.402 [2024-12-09 
05:24:17.210982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.210993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.211001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.402 [2024-12-09 05:24:17.211021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.402 [2024-12-09 05:24:17.211051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.402 [2024-12-09 05:24:17.211334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.402 [2024-12-09 05:24:17.211347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.402 [2024-12-09 05:24:17.211354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.211362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.402 [2024-12-09 05:24:17.211374] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:03.402 [2024-12-09 05:24:17.211396] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:03.402 [2024-12-09 05:24:17.211408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.211416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.211423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.402 [2024-12-09 05:24:17.211441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.402 [2024-12-09 05:24:17.211460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.402 [2024-12-09 05:24:17.211667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.402 [2024-12-09 05:24:17.211677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.402 [2024-12-09 05:24:17.211683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.211689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.402 [2024-12-09 05:24:17.211699] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:03.402 [2024-12-09 05:24:17.211713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:03.402 [2024-12-09 05:24:17.211724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.211731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.211746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.402 [2024-12-09 05:24:17.211759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.402 [2024-12-09 05:24:17.211775] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.402 [2024-12-09 05:24:17.212001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.402 [2024-12-09 05:24:17.212011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.402 [2024-12-09 05:24:17.212017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.212023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.402 [2024-12-09 05:24:17.212033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:03.402 [2024-12-09 05:24:17.212051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.212059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.402 [2024-12-09 05:24:17.212065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.402 [2024-12-09 05:24:17.212078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.402 [2024-12-09 05:24:17.212094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.403 [2024-12-09 05:24:17.212354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.403 [2024-12-09 05:24:17.212363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.403 [2024-12-09 05:24:17.212369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.212375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.403 [2024-12-09 05:24:17.212384] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:03.403 [2024-12-09 05:24:17.212398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:03.403 [2024-12-09 05:24:17.212410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:03.403 [2024-12-09 05:24:17.212524] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:03.403 [2024-12-09 05:24:17.212533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:03.403 [2024-12-09 05:24:17.212555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.212562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.212569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.403 [2024-12-09 05:24:17.212581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.403 [2024-12-09 05:24:17.212597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.403 [2024-12-09 05:24:17.212868] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.403 [2024-12-09 05:24:17.212878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.403 [2024-12-09 05:24:17.212884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.212890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.403 [2024-12-09 05:24:17.212899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:03.403 [2024-12-09 05:24:17.212924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.212932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.212940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.403 [2024-12-09 05:24:17.212952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.403 [2024-12-09 05:24:17.212968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.403 [2024-12-09 05:24:17.213179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.403 [2024-12-09 05:24:17.213189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.403 [2024-12-09 05:24:17.213194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.213201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.403 [2024-12-09 05:24:17.213209] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:03.403 [2024-12-09 05:24:17.213218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:03.403 [2024-12-09 05:24:17.213230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:03.403 [2024-12-09 05:24:17.213242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:03.403 [2024-12-09 05:24:17.213262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.213273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.403 [2024-12-09 05:24:17.213286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.403 [2024-12-09 05:24:17.213302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.403 [2024-12-09 05:24:17.213633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.403 [2024-12-09 05:24:17.213643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.403 [2024-12-09 05:24:17.213649] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.213657] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x615000025600): datao=0, datal=4096, cccid=0 00:31:03.403 [2024-12-09 05:24:17.213666] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:03.403 [2024-12-09 05:24:17.213674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.213690] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.213698] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.213824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.403 [2024-12-09 05:24:17.213837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.403 [2024-12-09 05:24:17.213843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.213850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.403 [2024-12-09 05:24:17.213868] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:03.403 [2024-12-09 05:24:17.213877] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:03.403 [2024-12-09 05:24:17.213891] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:03.403 [2024-12-09 05:24:17.213903] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:03.403 [2024-12-09 05:24:17.213916] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:03.403 [2024-12-09 05:24:17.213924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:03.403 [2024-12-09 05:24:17.213943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:03.403 [2024-12-09 05:24:17.213958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.213966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.213973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.403 [2024-12-09 05:24:17.213987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:03.403 [2024-12-09 05:24:17.214013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.403 [2024-12-09 05:24:17.214235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.403 [2024-12-09 05:24:17.214245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.403 [2024-12-09 05:24:17.214250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.403 [2024-12-09 05:24:17.214271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214278] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.403 [2024-12-09 05:24:17.214300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.403 [2024-12-09 05:24:17.214310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025600) 00:31:03.403 [2024-12-09 05:24:17.214331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.403 [2024-12-09 05:24:17.214339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025600) 00:31:03.403 [2024-12-09 05:24:17.214363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.403 [2024-12-09 05:24:17.214371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.403 [2024-12-09 05:24:17.214392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.403 [2024-12-09 05:24:17.214399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:03.403 [2024-12-09 05:24:17.214415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:03.403 [2024-12-09 05:24:17.214425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.403 [2024-12-09 05:24:17.214447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.403 [2024-12-09 05:24:17.214468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.403 [2024-12-09 05:24:17.214476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:03.403 [2024-12-09 05:24:17.214483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:03.403 [2024-12-09 05:24:17.214493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.403 [2024-12-09 05:24:17.214500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.403 [2024-12-09 05:24:17.214772] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.403 [2024-12-09 05:24:17.214782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.403 [2024-12-09 05:24:17.214788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.403 [2024-12-09 05:24:17.214794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.403 [2024-12-09 05:24:17.214804] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:03.404 [2024-12-09 05:24:17.218082] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:03.404 [2024-12-09 05:24:17.218132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.218142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.404 [2024-12-09 05:24:17.218159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.404 [2024-12-09 05:24:17.218243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.404 [2024-12-09 05:24:17.218523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.404 [2024-12-09 05:24:17.218536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.404 [2024-12-09 05:24:17.218546] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.218557] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:03.404 [2024-12-09 05:24:17.218566] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:03.404 [2024-12-09 05:24:17.218574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.218598] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.218607] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.263838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.404 [2024-12-09 05:24:17.263873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.404 [2024-12-09 05:24:17.263880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.263889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.404 [2024-12-09 05:24:17.263924] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:03.404 [2024-12-09 05:24:17.263987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.264000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.404 [2024-12-09 05:24:17.264020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.404 [2024-12-09 05:24:17.264033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:31:03.404 [2024-12-09 05:24:17.264044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.264052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:03.404 [2024-12-09 05:24:17.264063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.404 [2024-12-09 05:24:17.264094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.404 [2024-12-09 05:24:17.264103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:03.404 [2024-12-09 05:24:17.264501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.404 [2024-12-09 05:24:17.264516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.404 [2024-12-09 05:24:17.264523] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.264530] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=1024, cccid=4 00:31:03.404 [2024-12-09 05:24:17.264538] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=1024 00:31:03.404 [2024-12-09 05:24:17.264549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.264560] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.264567] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.264577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.404 [2024-12-09 05:24:17.264586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.404 [2024-12-09 05:24:17.264591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.264598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:03.404 [2024-12-09 05:24:17.305030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.404 [2024-12-09 05:24:17.305063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.404 [2024-12-09 05:24:17.305070] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.305089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.404 [2024-12-09 05:24:17.305124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.305133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.404 [2024-12-09 05:24:17.305150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.404 [2024-12-09 05:24:17.305180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.404 [2024-12-09 05:24:17.305379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.404 [2024-12-09 05:24:17.305389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.404 [2024-12-09 05:24:17.305395] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.404 
[2024-12-09 05:24:17.305402] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=3072, cccid=4 00:31:03.404 [2024-12-09 05:24:17.305409] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=3072 00:31:03.404 [2024-12-09 05:24:17.305416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.305436] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.305442] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.305636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.404 [2024-12-09 05:24:17.305645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.404 [2024-12-09 05:24:17.305650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.305657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.404 [2024-12-09 05:24:17.305683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.305695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.404 [2024-12-09 05:24:17.305708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.404 [2024-12-09 05:24:17.305730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.404 [2024-12-09 05:24:17.305986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.404 [2024-12-09 05:24:17.306000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.404 [2024-12-09 05:24:17.306006] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.306012] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=8, cccid=4 00:31:03.404 [2024-12-09 05:24:17.306019] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=8 00:31:03.404 [2024-12-09 05:24:17.306026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.306036] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.306042] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.350841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.404 [2024-12-09 05:24:17.350874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.404 [2024-12-09 05:24:17.350880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.404 [2024-12-09 05:24:17.350887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.404 ===================================================== 00:31:03.404 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:03.404 ===================================================== 00:31:03.404 Controller Capabilities/Features 00:31:03.405 ================================ 00:31:03.405 Vendor ID: 0000 00:31:03.405 Subsystem Vendor ID: 0000 
00:31:03.405 Serial Number: ....................
00:31:03.405 Model Number: ........................................
00:31:03.405 Firmware Version: 25.01
00:31:03.405 Recommended Arb Burst: 0
00:31:03.405 IEEE OUI Identifier: 00 00 00
00:31:03.405 Multi-path I/O
00:31:03.405 May have multiple subsystem ports: No
00:31:03.405 May have multiple controllers: No
00:31:03.405 Associated with SR-IOV VF: No
00:31:03.405 Max Data Transfer Size: 131072
00:31:03.405 Max Number of Namespaces: 0
00:31:03.405 Max Number of I/O Queues: 1024
00:31:03.405 NVMe Specification Version (VS): 1.3
00:31:03.405 NVMe Specification Version (Identify): 1.3
00:31:03.405 Maximum Queue Entries: 128
00:31:03.405 Contiguous Queues Required: Yes
00:31:03.405 Arbitration Mechanisms Supported
00:31:03.405 Weighted Round Robin: Not Supported
00:31:03.405 Vendor Specific: Not Supported
00:31:03.405 Reset Timeout: 15000 ms
00:31:03.405 Doorbell Stride: 4 bytes
00:31:03.405 NVM Subsystem Reset: Not Supported
00:31:03.405 Command Sets Supported
00:31:03.405 NVM Command Set: Supported
00:31:03.405 Boot Partition: Not Supported
00:31:03.405 Memory Page Size Minimum: 4096 bytes
00:31:03.405 Memory Page Size Maximum: 4096 bytes
00:31:03.405 Persistent Memory Region: Not Supported
00:31:03.405 Optional Asynchronous Events Supported
00:31:03.405 Namespace Attribute Notices: Not Supported
00:31:03.405 Firmware Activation Notices: Not Supported
00:31:03.405 ANA Change Notices: Not Supported
00:31:03.405 PLE Aggregate Log Change Notices: Not Supported
00:31:03.405 LBA Status Info Alert Notices: Not Supported
00:31:03.405 EGE Aggregate Log Change Notices: Not Supported
00:31:03.405 Normal NVM Subsystem Shutdown event: Not Supported
00:31:03.405 Zone Descriptor Change Notices: Not Supported
00:31:03.405 Discovery Log Change Notices: Supported
00:31:03.405 Controller Attributes
00:31:03.405 128-bit Host Identifier: Not Supported
00:31:03.405 Non-Operational Permissive Mode: Not Supported
00:31:03.405 NVM Sets: Not Supported
00:31:03.405 Read Recovery Levels: Not Supported
00:31:03.405 Endurance Groups: Not Supported
00:31:03.405 Predictable Latency Mode: Not Supported
00:31:03.405 Traffic Based Keep ALive: Not Supported
00:31:03.405 Namespace Granularity: Not Supported
00:31:03.405 SQ Associations: Not Supported
00:31:03.405 UUID List: Not Supported
00:31:03.405 Multi-Domain Subsystem: Not Supported
00:31:03.405 Fixed Capacity Management: Not Supported
00:31:03.405 Variable Capacity Management: Not Supported
00:31:03.405 Delete Endurance Group: Not Supported
00:31:03.405 Delete NVM Set: Not Supported
00:31:03.405 Extended LBA Formats Supported: Not Supported
00:31:03.405 Flexible Data Placement Supported: Not Supported
00:31:03.405
00:31:03.405 Controller Memory Buffer Support
00:31:03.405 ================================
00:31:03.405 Supported: No
00:31:03.405
00:31:03.405 Persistent Memory Region Support
00:31:03.405 ================================
00:31:03.405 Supported: No
00:31:03.405
00:31:03.405 Admin Command Set Attributes
00:31:03.405 ============================
00:31:03.405 Security Send/Receive: Not Supported
00:31:03.405 Format NVM: Not Supported
00:31:03.405 Firmware Activate/Download: Not Supported
00:31:03.405 Namespace Management: Not Supported
00:31:03.405 Device Self-Test: Not Supported
00:31:03.405 Directives: Not Supported
00:31:03.405 NVMe-MI: Not Supported
00:31:03.405 Virtualization Management: Not Supported
00:31:03.405 Doorbell Buffer Config: Not Supported
00:31:03.405 Get LBA Status Capability: Not Supported
00:31:03.405 Command & Feature Lockdown Capability: Not Supported
00:31:03.405 Abort Command Limit: 1
00:31:03.405 Async Event Request Limit: 4
00:31:03.405 Number of Firmware Slots: N/A
00:31:03.405 Firmware Slot 1 Read-Only: N/A
00:31:03.405 Firmware Activation Without Reset: N/A
00:31:03.405 Multiple Update Detection Support: N/A
00:31:03.405 Firmware Update Granularity: No Information Provided
00:31:03.405 Per-Namespace SMART Log: No
00:31:03.405 Asymmetric Namespace Access Log Page: Not Supported
00:31:03.405 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:31:03.405 Command Effects Log Page: Not Supported
00:31:03.405 Get Log Page Extended Data: Supported
00:31:03.405 Telemetry Log Pages: Not Supported
00:31:03.405 Persistent Event Log Pages: Not Supported
00:31:03.405 Supported Log Pages Log Page: May Support
00:31:03.405 Commands Supported & Effects Log Page: Not Supported
00:31:03.405 Feature Identifiers & Effects Log Page:May Support
00:31:03.405 NVMe-MI Commands & Effects Log Page: May Support
00:31:03.405 Data Area 4 for Telemetry Log: Not Supported
00:31:03.405 Error Log Page Entries Supported: 128
00:31:03.405 Keep Alive: Not Supported
00:31:03.405
00:31:03.405 NVM Command Set Attributes
00:31:03.405 ==========================
00:31:03.405 Submission Queue Entry Size
00:31:03.405 Max: 1
00:31:03.405 Min: 1
00:31:03.405 Completion Queue Entry Size
00:31:03.405 Max: 1
00:31:03.405 Min: 1
00:31:03.405 Number of Namespaces: 0
00:31:03.405 Compare Command: Not Supported
00:31:03.405 Write Uncorrectable Command: Not Supported
00:31:03.405 Dataset Management Command: Not Supported
00:31:03.405 Write Zeroes Command: Not Supported
00:31:03.405 Set Features Save Field: Not Supported
00:31:03.405 Reservations: Not Supported
00:31:03.405 Timestamp: Not Supported
00:31:03.405 Copy: Not Supported
00:31:03.405 Volatile Write Cache: Not Present
00:31:03.405 Atomic Write Unit (Normal): 1
00:31:03.405 Atomic Write Unit (PFail): 1
00:31:03.405 Atomic Compare & Write Unit: 1
00:31:03.405 Fused Compare & Write: Supported
00:31:03.405 Scatter-Gather List
00:31:03.405 SGL Command Set: Supported
00:31:03.405 SGL Keyed: Supported
00:31:03.405 SGL Bit Bucket Descriptor: Not Supported
00:31:03.405 SGL Metadata Pointer: Not Supported
00:31:03.405 Oversized SGL: Not Supported
00:31:03.405 SGL Metadata Address: Not Supported
00:31:03.405 SGL Offset: Supported
00:31:03.405 Transport SGL Data Block: Not Supported
00:31:03.405 Replay Protected Memory Block: Not Supported
00:31:03.405
00:31:03.405 Firmware Slot Information
00:31:03.405 =========================
00:31:03.405 Active slot: 0
00:31:03.405
00:31:03.405
00:31:03.405 Error Log
00:31:03.405 =========
00:31:03.405
00:31:03.405 Active Namespaces
00:31:03.405 =================
00:31:03.405 Discovery Log Page
00:31:03.405 ==================
00:31:03.405 Generation Counter: 2
00:31:03.405 Number of Records: 2
00:31:03.405 Record Format: 0
00:31:03.405
00:31:03.405 Discovery Log Entry 0
00:31:03.405 ----------------------
00:31:03.405 Transport Type: 3 (TCP)
00:31:03.405 Address Family: 1 (IPv4)
00:31:03.405 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:03.405 Entry Flags:
00:31:03.405 Duplicate Returned Information: 1
00:31:03.405 Explicit Persistent Connection Support for Discovery: 1
00:31:03.405 Transport Requirements:
00:31:03.405 Secure Channel: Not Required
00:31:03.405 Port ID: 0 (0x0000)
00:31:03.405 Controller ID: 65535 (0xffff)
00:31:03.405 Admin Max SQ Size: 128
00:31:03.405 Transport Service Identifier: 4420
00:31:03.405 NVM 
Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:03.405 Transport Address: 10.0.0.2 00:31:03.405 Discovery Log Entry 1 00:31:03.405 ---------------------- 00:31:03.405 Transport Type: 3 (TCP) 00:31:03.405 Address Family: 1 (IPv4) 00:31:03.405 Subsystem Type: 2 (NVM Subsystem) 00:31:03.405 Entry Flags: 00:31:03.405 Duplicate Returned Information: 0 00:31:03.405 Explicit Persistent Connection Support for Discovery: 0 00:31:03.405 Transport Requirements: 00:31:03.405 Secure Channel: Not Required 00:31:03.405 Port ID: 0 (0x0000) 00:31:03.405 Controller ID: 65535 (0xffff) 00:31:03.405 Admin Max SQ Size: 128 00:31:03.405 Transport Service Identifier: 4420 00:31:03.405 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:03.405 Transport Address: 10.0.0.2 [2024-12-09 05:24:17.351058] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:03.406 [2024-12-09 05:24:17.351078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.351092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.406 [2024-12-09 05:24:17.351102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.351111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.406 [2024-12-09 05:24:17.351119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.351127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.406 [2024-12-09 05:24:17.351134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.351143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.406 [2024-12-09 05:24:17.351158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.351166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.351174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.406 [2024-12-09 05:24:17.351193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.406 [2024-12-09 05:24:17.351229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.406 [2024-12-09 05:24:17.351512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.406 [2024-12-09 05:24:17.351523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.406 [2024-12-09 05:24:17.351530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.351540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.351554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.351562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:31:03.406 [2024-12-09 05:24:17.351568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.406 [2024-12-09 05:24:17.351581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.406 [2024-12-09 05:24:17.351606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.406 [2024-12-09 05:24:17.351847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.406 [2024-12-09 05:24:17.351858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.406 [2024-12-09 05:24:17.351863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.351870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.351878] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:03.406 [2024-12-09 05:24:17.351887] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:03.406 [2024-12-09 05:24:17.351905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.351912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.351919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.406 [2024-12-09 05:24:17.351931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.406 [2024-12-09 05:24:17.351947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.406 [2024-12-09 05:24:17.352168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.406 [2024-12-09 05:24:17.352177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.406 [2024-12-09 05:24:17.352183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.352189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.352204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.352210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.352216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.406 [2024-12-09 05:24:17.352227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.406 [2024-12-09 05:24:17.352241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.406 [2024-12-09 05:24:17.352473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.406 [2024-12-09 05:24:17.352482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.406 [2024-12-09 05:24:17.352487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.352493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 
05:24:17.352508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.352514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.352520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.406 [2024-12-09 05:24:17.352530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.406 [2024-12-09 05:24:17.352544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.406 [2024-12-09 05:24:17.352774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.406 [2024-12-09 05:24:17.352783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.406 [2024-12-09 05:24:17.352788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.352795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.352809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.352824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.352831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.406 [2024-12-09 05:24:17.352841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.406 [2024-12-09 05:24:17.352855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.406 [2024-12-09 05:24:17.353088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.406 [2024-12-09 05:24:17.353098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.406 [2024-12-09 05:24:17.353103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.353109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.353123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.353129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.353135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.406 [2024-12-09 05:24:17.353145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.406 [2024-12-09 05:24:17.353159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.406 [2024-12-09 05:24:17.353379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.406 [2024-12-09 05:24:17.353388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.406 [2024-12-09 05:24:17.353394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.353400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.353413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.353420] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.353425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.406 [2024-12-09 05:24:17.353436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.406 [2024-12-09 05:24:17.353449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.406 [2024-12-09 05:24:17.353679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.406 [2024-12-09 05:24:17.353688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.406 [2024-12-09 05:24:17.353693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.353699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.406 [2024-12-09 05:24:17.353713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.353720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.406 [2024-12-09 05:24:17.353725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.406 [2024-12-09 05:24:17.353739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.406 [2024-12-09 05:24:17.353753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.406 [2024-12-09 05:24:17.353985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.406 [2024-12-09 05:24:17.353996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.406 [2024-12-09 05:24:17.354001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.354007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.407 [2024-12-09 05:24:17.354021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.354028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.354043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.407 [2024-12-09 05:24:17.354054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.407 [2024-12-09 05:24:17.354068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.407 [2024-12-09 05:24:17.354292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.407 [2024-12-09 05:24:17.354301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.407 [2024-12-09 05:24:17.354307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.354313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.407 [2024-12-09 05:24:17.354326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.354332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.354338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.407 [2024-12-09 05:24:17.354349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.407 [2024-12-09 05:24:17.354362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.407 [2024-12-09 05:24:17.354591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.407 [2024-12-09 05:24:17.354600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.407 [2024-12-09 05:24:17.354605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.354611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.407 [2024-12-09 05:24:17.354625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.354631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.354637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.407 [2024-12-09 05:24:17.354647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.407 [2024-12-09 05:24:17.354660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.407 [2024-12-09 05:24:17.358835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.407 [2024-12-09 05:24:17.358860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.407 [2024-12-09 05:24:17.358866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.358873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.407 [2024-12-09 05:24:17.358894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.358901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.358908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.407 [2024-12-09 05:24:17.358921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.407 [2024-12-09 05:24:17.358946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.407 [2024-12-09 05:24:17.359232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.407 [2024-12-09 05:24:17.359242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.407 [2024-12-09 05:24:17.359247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.407 [2024-12-09 05:24:17.359253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.407 [2024-12-09 05:24:17.359266] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:31:03.671 00:31:03.671 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L 
all 00:31:03.671 [2024-12-09 05:24:17.476020] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:31:03.671 [2024-12-09 05:24:17.476113] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716642 ] 00:31:03.671 [2024-12-09 05:24:17.552661] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:03.671 [2024-12-09 05:24:17.552782] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:03.671 [2024-12-09 05:24:17.552795] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:03.671 [2024-12-09 05:24:17.556827] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:03.671 [2024-12-09 05:24:17.556858] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:03.671 [2024-12-09 05:24:17.557855] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:03.671 [2024-12-09 05:24:17.557918] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025600 0 00:31:03.671 [2024-12-09 05:24:17.571849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:03.671 [2024-12-09 05:24:17.571883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:03.671 [2024-12-09 05:24:17.571893] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:03.671 [2024-12-09 05:24:17.571901] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:03.671 [2024-12-09 05:24:17.571966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.671 [2024-12-09 05:24:17.571980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.671 [2024-12-09 05:24:17.571989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.671 [2024-12-09 05:24:17.572017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:03.671 [2024-12-09 05:24:17.572053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.671 [2024-12-09 05:24:17.579846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.671 [2024-12-09 05:24:17.579877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.671 [2024-12-09 05:24:17.579890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.671 [2024-12-09 05:24:17.579900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.671 [2024-12-09 05:24:17.579924] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:03.671 [2024-12-09 05:24:17.579944] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:03.671 [2024-12-09 05:24:17.579955] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:03.671 [2024-12-09 05:24:17.579980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.671 
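The connect sequence traced above (socket connect, icreq/icresp exchange, FABRIC CONNECT, then the register-read states) is what spdk_nvme_connect() performs internally when spdk_nvme_identify is pointed at the target. A minimal host-side sketch of the same connection against the transport ID string the test passes via -r, built against SPDK's public headers; the app name "identify_sketch" is an arbitrary placeholder and error handling is trimmed to the essentials:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        /* Bring up the SPDK environment (hugepages, etc.). */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string the harness passes to -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronous connect: this one call drives the icreq exchange,
         * FABRIC CONNECT, and the CC/CSTS init state machine whose
         * DEBUG traces fill this log. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* MDTS-derived limit; the traces below report max_xfer_size 131072. */
        printf("max xfer size: %u\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
        spdk_nvme_detach(ctrlr);
        return 0;
    }
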
[2024-12-09 05:24:17.579992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.671 [2024-12-09 05:24:17.579999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.671 [2024-12-09 05:24:17.580017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.671 [2024-12-09 05:24:17.580046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.671 [2024-12-09 05:24:17.580310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.671 [2024-12-09 05:24:17.580322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.671 [2024-12-09 05:24:17.580329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.671 [2024-12-09 05:24:17.580337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.671 [2024-12-09 05:24:17.580352] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:03.671 [2024-12-09 05:24:17.580368] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:03.672 [2024-12-09 05:24:17.580381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.580389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.580397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.672 [2024-12-09 05:24:17.580413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.672 [2024-12-09 05:24:17.580433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.672 [2024-12-09 05:24:17.580682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.672 [2024-12-09 05:24:17.580691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.672 [2024-12-09 05:24:17.580697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.580703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.672 [2024-12-09 05:24:17.580713] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:03.672 [2024-12-09 05:24:17.580728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:03.672 [2024-12-09 05:24:17.580739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.580751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.580760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.672 [2024-12-09 05:24:17.580772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.672 [2024-12-09 05:24:17.580789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.672 [2024-12-09 05:24:17.581029] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.672 [2024-12-09 05:24:17.581039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.672 [2024-12-09 05:24:17.581045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.581051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.672 [2024-12-09 05:24:17.581060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:03.672 [2024-12-09 05:24:17.581075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.581088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.581099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.672 [2024-12-09 05:24:17.581112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.672 [2024-12-09 05:24:17.581128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.672 [2024-12-09 05:24:17.581358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.672 [2024-12-09 05:24:17.581368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.672 [2024-12-09 05:24:17.581373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.581380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.672 [2024-12-09 05:24:17.581388] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:03.672 [2024-12-09 05:24:17.581397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:03.672 [2024-12-09 05:24:17.581412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:03.672 [2024-12-09 05:24:17.581525] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:03.672 [2024-12-09 05:24:17.581533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:03.672 [2024-12-09 05:24:17.581555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.581562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.581569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.672 [2024-12-09 05:24:17.581582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.672 [2024-12-09 05:24:17.581598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.672 [2024-12-09 05:24:17.581839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.672 [2024-12-09 05:24:17.581849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.672 
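Each FABRIC PROPERTY GET/SET capsule above is the fabrics stand-in for an MMIO register access: the init state machine reads VS and CAP, checks CC.EN, confirms CSTS.RDY = 0 while the controller is disabled, writes CC.EN = 1, and then polls until CSTS.RDY = 1 (traced just below). After the connect completes, the same registers are readable through the public accessors; a short sketch, assuming the ctrlr handle and includes from the snippet above:

    /* Fabrics property reads surfaced as cached register values. */
    union spdk_nvme_vs_register   vs   = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
    union spdk_nvme_cap_register  cap  = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
    union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

    /* CAP.TO is in 500 ms units; the 15000 ms state timeouts in the
     * traces above are consistent with TO = 30 (30 x 500 ms). */
    printf("VS %u.%u CAP.TO %u CSTS.RDY %u\n",
           (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
           (unsigned)cap.bits.to, (unsigned)csts.bits.rdy);
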
[2024-12-09 05:24:17.581855] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.581861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.672 [2024-12-09 05:24:17.581869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:03.672 [2024-12-09 05:24:17.581885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.581894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.581901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.672 [2024-12-09 05:24:17.581915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.672 [2024-12-09 05:24:17.581930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.672 [2024-12-09 05:24:17.582125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.672 [2024-12-09 05:24:17.582135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.672 [2024-12-09 05:24:17.582140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.582146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.672 [2024-12-09 05:24:17.582155] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:03.672 [2024-12-09 05:24:17.582168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:03.672 [2024-12-09 05:24:17.582181] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:03.672 [2024-12-09 05:24:17.582197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:03.672 [2024-12-09 05:24:17.582216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.582223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.672 [2024-12-09 05:24:17.582236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.672 [2024-12-09 05:24:17.582254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.672 [2024-12-09 05:24:17.582552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.672 [2024-12-09 05:24:17.582563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.672 [2024-12-09 05:24:17.582568] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.582580] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=0 00:31:03.672 [2024-12-09 05:24:17.582588] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025600): expected_datao=0, 
payload_size=4096 00:31:03.672 [2024-12-09 05:24:17.582597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.582612] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.582619] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.582742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.672 [2024-12-09 05:24:17.582752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.672 [2024-12-09 05:24:17.582757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.582763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.672 [2024-12-09 05:24:17.582781] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:03.672 [2024-12-09 05:24:17.582790] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:03.672 [2024-12-09 05:24:17.582801] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:03.672 [2024-12-09 05:24:17.582812] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:03.672 [2024-12-09 05:24:17.582834] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:03.672 [2024-12-09 05:24:17.582842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:03.672 [2024-12-09 05:24:17.582858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:03.672 [2024-12-09 05:24:17.582873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.582881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.582889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.672 [2024-12-09 05:24:17.582903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:03.672 [2024-12-09 05:24:17.582921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.672 [2024-12-09 05:24:17.583137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.672 [2024-12-09 05:24:17.583147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.672 [2024-12-09 05:24:17.583156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.583162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.672 [2024-12-09 05:24:17.583173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.583181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.583191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:03.672 [2024-12-09 05:24:17.583204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.672 [2024-12-09 05:24:17.583214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.583220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.672 [2024-12-09 05:24:17.583228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025600) 00:31:03.672 [2024-12-09 05:24:17.583238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.673 [2024-12-09 05:24:17.583246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.583252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.583258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025600) 00:31:03.673 [2024-12-09 05:24:17.583267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.673 [2024-12-09 05:24:17.583275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.583281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.583287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.673 [2024-12-09 05:24:17.583296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.673 [2024-12-09 05:24:17.583304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.583319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.583331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.583338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.673 [2024-12-09 05:24:17.583350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.673 [2024-12-09 05:24:17.583368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:03.673 [2024-12-09 05:24:17.583376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:03.673 [2024-12-09 05:24:17.583383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:03.673 [2024-12-09 05:24:17.583390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.673 [2024-12-09 05:24:17.583397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.673 [2024-12-09 05:24:17.583696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.673 [2024-12-09 05:24:17.583705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.673 [2024-12-09 05:24:17.583716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.583722] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.673 [2024-12-09 05:24:17.583731] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:03.673 [2024-12-09 05:24:17.583747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.583762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.583771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.583781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.583788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.583795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.673 [2024-12-09 05:24:17.583807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:03.673 [2024-12-09 05:24:17.587847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.673 [2024-12-09 05:24:17.588110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.673 [2024-12-09 05:24:17.588123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.673 [2024-12-09 05:24:17.588129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.588136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.673 [2024-12-09 05:24:17.588230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.588255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.588272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.588279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.673 [2024-12-09 05:24:17.588297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.673 [2024-12-09 05:24:17.588319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.673 [2024-12-09 05:24:17.588554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.673 [2024-12-09 05:24:17.588564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.673 [2024-12-09 05:24:17.588570] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.588577] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:03.673 [2024-12-09 05:24:17.588584] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): 
expected_datao=0, payload_size=4096 00:31:03.673 [2024-12-09 05:24:17.588592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.588630] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.588637] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.588784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.673 [2024-12-09 05:24:17.588798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.673 [2024-12-09 05:24:17.588803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.588810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.673 [2024-12-09 05:24:17.588845] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:03.673 [2024-12-09 05:24:17.588869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.588888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.588903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.588910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.673 [2024-12-09 05:24:17.588924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.673 [2024-12-09 05:24:17.588944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.673 [2024-12-09 05:24:17.589231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.673 [2024-12-09 05:24:17.589241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.673 [2024-12-09 05:24:17.589246] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.589253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:03.673 [2024-12-09 05:24:17.589260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:03.673 [2024-12-09 05:24:17.589273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.589318] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.589325] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.589469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.673 [2024-12-09 05:24:17.589480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.673 [2024-12-09 05:24:17.589486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.589492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.673 [2024-12-09 05:24:17.589514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 
ms) 00:31:03.673 [2024-12-09 05:24:17.589529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.589547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.589554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.673 [2024-12-09 05:24:17.589567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.673 [2024-12-09 05:24:17.589586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.673 [2024-12-09 05:24:17.589834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.673 [2024-12-09 05:24:17.589844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.673 [2024-12-09 05:24:17.589849] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.589856] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:03.673 [2024-12-09 05:24:17.589863] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:03.673 [2024-12-09 05:24:17.589869] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.589896] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.589903] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.590058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.673 [2024-12-09 05:24:17.590067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.673 [2024-12-09 05:24:17.590073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.673 [2024-12-09 05:24:17.590083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.673 [2024-12-09 05:24:17.590102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.590115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.590127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.590139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.590148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:03.673 [2024-12-09 05:24:17.590157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:03.674 [2024-12-09 05:24:17.590166] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:03.674 
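With identify controller, AER configuration, keep-alive, queue-count negotiation, and the per-namespace IDENTIFY passes (active NS list, NS data, NS ID descriptors) done, the state machine is about to report ready (traced just below), after which namespaces are walkable through the public API. A minimal enumeration sketch, again assuming the ctrlr handle and includes from the connect snippet; the identify report further down shows the single namespace this pass discovers:

    /* Walk the active namespace list gathered by IDENTIFY above. */
    for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        /* Matches the "Namespace ID:1 ... Size (in LBAs): 131072"
         * section of the report below. */
        printf("ns %u: %llu LBAs of %u bytes\n", nsid,
               (unsigned long long)spdk_nvme_ns_get_num_sectors(ns),
               spdk_nvme_ns_get_sector_size(ns));
    }
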
[2024-12-09 05:24:17.590175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:03.674 [2024-12-09 05:24:17.590183] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:03.674 [2024-12-09 05:24:17.590224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.590232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.674 [2024-12-09 05:24:17.590245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.674 [2024-12-09 05:24:17.590256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.590263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.590270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:03.674 [2024-12-09 05:24:17.590283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.674 [2024-12-09 05:24:17.590302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.674 [2024-12-09 05:24:17.590311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:03.674 [2024-12-09 05:24:17.590563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.674 [2024-12-09 05:24:17.590574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.674 [2024-12-09 05:24:17.590580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.590590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.674 [2024-12-09 05:24:17.590602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.674 [2024-12-09 05:24:17.590610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.674 [2024-12-09 05:24:17.590617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.590624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:03.674 [2024-12-09 05:24:17.590638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.590644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:03.674 [2024-12-09 05:24:17.590655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.674 [2024-12-09 05:24:17.590669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:03.674 [2024-12-09 05:24:17.590900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.674 [2024-12-09 05:24:17.590910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.674 [2024-12-09 05:24:17.590916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.590922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 
00:31:03.674 [2024-12-09 05:24:17.590935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.590942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:03.674 [2024-12-09 05:24:17.590952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.674 [2024-12-09 05:24:17.590969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:03.674 [2024-12-09 05:24:17.591178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.674 [2024-12-09 05:24:17.591187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.674 [2024-12-09 05:24:17.591195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.591201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:03.674 [2024-12-09 05:24:17.591214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.591220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:03.674 [2024-12-09 05:24:17.591231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.674 [2024-12-09 05:24:17.591244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:03.674 [2024-12-09 05:24:17.591463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.674 [2024-12-09 05:24:17.591472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.674 [2024-12-09 05:24:17.591478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.591484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:03.674 [2024-12-09 05:24:17.591510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.591518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:03.674 [2024-12-09 05:24:17.591530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.674 [2024-12-09 05:24:17.591544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.591551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:03.674 [2024-12-09 05:24:17.591564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.674 [2024-12-09 05:24:17.591576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.591583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000025600) 00:31:03.674 [2024-12-09 05:24:17.591594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.674 [2024-12-09 05:24:17.591609] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.591618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025600) 00:31:03.674 [2024-12-09 05:24:17.591629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.674 [2024-12-09 05:24:17.591648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:03.674 [2024-12-09 05:24:17.591660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:03.674 [2024-12-09 05:24:17.591667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:31:03.674 [2024-12-09 05:24:17.591674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:03.674 [2024-12-09 05:24:17.595846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.674 [2024-12-09 05:24:17.595872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.674 [2024-12-09 05:24:17.595880] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.595887] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=8192, cccid=5 00:31:03.674 [2024-12-09 05:24:17.595896] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000025600): expected_datao=0, payload_size=8192 00:31:03.674 [2024-12-09 05:24:17.595904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.595918] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.595925] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.595938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.674 [2024-12-09 05:24:17.595955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.674 [2024-12-09 05:24:17.595960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.595966] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=512, cccid=4 00:31:03.674 [2024-12-09 05:24:17.595974] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=512 00:31:03.674 [2024-12-09 05:24:17.595980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.595990] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.595995] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.596003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.674 [2024-12-09 05:24:17.596011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.674 [2024-12-09 05:24:17.596016] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.596022] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=512, cccid=6 00:31:03.674 [2024-12-09 05:24:17.596029] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on 
tqpair(0x615000025600): expected_datao=0, payload_size=512 00:31:03.674 [2024-12-09 05:24:17.596037] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.596047] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.596052] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.596060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:03.674 [2024-12-09 05:24:17.596069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:03.674 [2024-12-09 05:24:17.596074] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.596080] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=7 00:31:03.674 [2024-12-09 05:24:17.596087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:03.674 [2024-12-09 05:24:17.596093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.596103] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.596110] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.633062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.674 [2024-12-09 05:24:17.633096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.674 [2024-12-09 05:24:17.633108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.633117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:03.674 [2024-12-09 05:24:17.633151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.674 [2024-12-09 05:24:17.633167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.674 [2024-12-09 05:24:17.633173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.674 [2024-12-09 05:24:17.633179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:03.674 [2024-12-09 05:24:17.633200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.674 [2024-12-09 05:24:17.633208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.675 [2024-12-09 05:24:17.633213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.675 [2024-12-09 05:24:17.633219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000025600 00:31:03.675 [2024-12-09 05:24:17.633231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.675 [2024-12-09 05:24:17.633242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.675 [2024-12-09 05:24:17.633248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.675 [2024-12-09 05:24:17.633254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025600 00:31:03.675 ===================================================== 00:31:03.675 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.675 ===================================================== 00:31:03.675 Controller Capabilities/Features 00:31:03.675 ================================ 
00:31:03.675 Vendor ID: 8086 00:31:03.675 Subsystem Vendor ID: 8086 00:31:03.675 Serial Number: SPDK00000000000001 00:31:03.675 Model Number: SPDK bdev Controller 00:31:03.675 Firmware Version: 25.01 00:31:03.675 Recommended Arb Burst: 6 00:31:03.675 IEEE OUI Identifier: e4 d2 5c 00:31:03.675 Multi-path I/O 00:31:03.675 May have multiple subsystem ports: Yes 00:31:03.675 May have multiple controllers: Yes 00:31:03.675 Associated with SR-IOV VF: No 00:31:03.675 Max Data Transfer Size: 131072 00:31:03.675 Max Number of Namespaces: 32 00:31:03.675 Max Number of I/O Queues: 127 00:31:03.675 NVMe Specification Version (VS): 1.3 00:31:03.675 NVMe Specification Version (Identify): 1.3 00:31:03.675 Maximum Queue Entries: 128 00:31:03.675 Contiguous Queues Required: Yes 00:31:03.675 Arbitration Mechanisms Supported 00:31:03.675 Weighted Round Robin: Not Supported 00:31:03.675 Vendor Specific: Not Supported 00:31:03.675 Reset Timeout: 15000 ms 00:31:03.675 Doorbell Stride: 4 bytes 00:31:03.675 NVM Subsystem Reset: Not Supported 00:31:03.675 Command Sets Supported 00:31:03.675 NVM Command Set: Supported 00:31:03.675 Boot Partition: Not Supported 00:31:03.675 Memory Page Size Minimum: 4096 bytes 00:31:03.675 Memory Page Size Maximum: 4096 bytes 00:31:03.675 Persistent Memory Region: Not Supported 00:31:03.675 Optional Asynchronous Events Supported 00:31:03.675 Namespace Attribute Notices: Supported 00:31:03.675 Firmware Activation Notices: Not Supported 00:31:03.675 ANA Change Notices: Not Supported 00:31:03.675 PLE Aggregate Log Change Notices: Not Supported 00:31:03.675 LBA Status Info Alert Notices: Not Supported 00:31:03.675 EGE Aggregate Log Change Notices: Not Supported 00:31:03.675 Normal NVM Subsystem Shutdown event: Not Supported 00:31:03.675 Zone Descriptor Change Notices: Not Supported 00:31:03.675 Discovery Log Change Notices: Not Supported 00:31:03.675 Controller Attributes 00:31:03.675 128-bit Host Identifier: Supported 00:31:03.675 Non-Operational Permissive Mode: Not Supported 00:31:03.675 NVM Sets: Not Supported 00:31:03.675 Read Recovery Levels: Not Supported 00:31:03.675 Endurance Groups: Not Supported 00:31:03.675 Predictable Latency Mode: Not Supported 00:31:03.675 Traffic Based Keep Alive: Not Supported 00:31:03.675 Namespace Granularity: Not Supported 00:31:03.675 SQ Associations: Not Supported 00:31:03.675 UUID List: Not Supported 00:31:03.675 Multi-Domain Subsystem: Not Supported 00:31:03.675 Fixed Capacity Management: Not Supported 00:31:03.675 Variable Capacity Management: Not Supported 00:31:03.675 Delete Endurance Group: Not Supported 00:31:03.675 Delete NVM Set: Not Supported 00:31:03.675 Extended LBA Formats Supported: Not Supported 00:31:03.675 Flexible Data Placement Supported: Not Supported 00:31:03.675 00:31:03.675 Controller Memory Buffer Support 00:31:03.675 ================================ 00:31:03.675 Supported: No 00:31:03.675 00:31:03.675 Persistent Memory Region Support 00:31:03.675 ================================ 00:31:03.675 Supported: No 00:31:03.675 00:31:03.675 Admin Command Set Attributes 00:31:03.675 ============================ 00:31:03.675 Security Send/Receive: Not Supported 00:31:03.675 Format NVM: Not Supported 00:31:03.675 Firmware Activate/Download: Not Supported 00:31:03.675 Namespace Management: Not Supported 00:31:03.675 Device Self-Test: Not Supported 00:31:03.675 Directives: Not Supported 00:31:03.675 NVMe-MI: Not Supported 00:31:03.675 Virtualization Management: Not Supported 00:31:03.675 Doorbell Buffer Config: Not Supported 00:31:03.675 
Get LBA Status Capability: Not Supported 00:31:03.675 Command & Feature Lockdown Capability: Not Supported 00:31:03.675 Abort Command Limit: 4 00:31:03.675 Async Event Request Limit: 4 00:31:03.675 Number of Firmware Slots: N/A 00:31:03.675 Firmware Slot 1 Read-Only: N/A 00:31:03.675 Firmware Activation Without Reset: N/A 00:31:03.675 Multiple Update Detection Support: N/A 00:31:03.675 Firmware Update Granularity: No Information Provided 00:31:03.675 Per-Namespace SMART Log: No 00:31:03.675 Asymmetric Namespace Access Log Page: Not Supported 00:31:03.675 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:03.675 Command Effects Log Page: Supported 00:31:03.675 Get Log Page Extended Data: Supported 00:31:03.675 Telemetry Log Pages: Not Supported 00:31:03.675 Persistent Event Log Pages: Not Supported 00:31:03.675 Supported Log Pages Log Page: May Support 00:31:03.675 Commands Supported & Effects Log Page: Not Supported 00:31:03.675 Feature Identifiers & Effects Log Page: May Support 00:31:03.675 NVMe-MI Commands & Effects Log Page: May Support 00:31:03.675 Data Area 4 for Telemetry Log: Not Supported 00:31:03.675 Error Log Page Entries Supported: 128 00:31:03.675 Keep Alive: Supported 00:31:03.675 Keep Alive Granularity: 10000 ms 00:31:03.675 00:31:03.675 NVM Command Set Attributes 00:31:03.675 ========================== 00:31:03.675 Submission Queue Entry Size 00:31:03.675 Max: 64 00:31:03.675 Min: 64 00:31:03.675 Completion Queue Entry Size 00:31:03.675 Max: 16 00:31:03.675 Min: 16 00:31:03.675 Number of Namespaces: 32 00:31:03.675 Compare Command: Supported 00:31:03.675 Write Uncorrectable Command: Not Supported 00:31:03.675 Dataset Management Command: Supported 00:31:03.675 Write Zeroes Command: Supported 00:31:03.675 Set Features Save Field: Not Supported 00:31:03.675 Reservations: Supported 00:31:03.675 Timestamp: Not Supported 00:31:03.675 Copy: Supported 00:31:03.675 Volatile Write Cache: Present 00:31:03.675 Atomic Write Unit (Normal): 1 00:31:03.675 Atomic Write Unit (PFail): 1 00:31:03.675 Atomic Compare & Write Unit: 1 00:31:03.675 Fused Compare & Write: Supported 00:31:03.675 Scatter-Gather List 00:31:03.675 SGL Command Set: Supported 00:31:03.675 SGL Keyed: Supported 00:31:03.675 SGL Bit Bucket Descriptor: Not Supported 00:31:03.675 SGL Metadata Pointer: Not Supported 00:31:03.675 Oversized SGL: Not Supported 00:31:03.675 SGL Metadata Address: Not Supported 00:31:03.675 SGL Offset: Supported 00:31:03.675 Transport SGL Data Block: Not Supported 00:31:03.675 Replay Protected Memory Block: Not Supported 00:31:03.675 00:31:03.675 Firmware Slot Information 00:31:03.675 ========================= 00:31:03.675 Active slot: 1 00:31:03.675 Slot 1 Firmware Revision: 25.01 00:31:03.675 00:31:03.675 00:31:03.675 Commands Supported and Effects 00:31:03.675 ============================== 00:31:03.675 Admin Commands 00:31:03.675 -------------- 00:31:03.675 Get Log Page (02h): Supported 00:31:03.675 Identify (06h): Supported 00:31:03.675 Abort (08h): Supported 00:31:03.675 Set Features (09h): Supported 00:31:03.675 Get Features (0Ah): Supported 00:31:03.675 Asynchronous Event Request (0Ch): Supported 00:31:03.675 Keep Alive (18h): Supported 00:31:03.675 I/O Commands 00:31:03.675 ------------ 00:31:03.675 Flush (00h): Supported LBA-Change 00:31:03.675 Write (01h): Supported LBA-Change 00:31:03.675 Read (02h): Supported 00:31:03.675 Compare (05h): Supported 00:31:03.675 Write Zeroes (08h): Supported LBA-Change 00:31:03.675 Dataset Management (09h): Supported LBA-Change 00:31:03.675 Copy (19h): 
Supported LBA-Change 00:31:03.675 00:31:03.675 Error Log 00:31:03.675 ========= 00:31:03.675 00:31:03.675 Arbitration 00:31:03.675 =========== 00:31:03.675 Arbitration Burst: 1 00:31:03.675 00:31:03.675 Power Management 00:31:03.675 ================ 00:31:03.675 Number of Power States: 1 00:31:03.675 Current Power State: Power State #0 00:31:03.675 Power State #0: 00:31:03.675 Max Power: 0.00 W 00:31:03.675 Non-Operational State: Operational 00:31:03.675 Entry Latency: Not Reported 00:31:03.675 Exit Latency: Not Reported 00:31:03.675 Relative Read Throughput: 0 00:31:03.675 Relative Read Latency: 0 00:31:03.676 Relative Write Throughput: 0 00:31:03.676 Relative Write Latency: 0 00:31:03.676 Idle Power: Not Reported 00:31:03.676 Active Power: Not Reported 00:31:03.676 Non-Operational Permissive Mode: Not Supported 00:31:03.676 00:31:03.676 Health Information 00:31:03.676 ================== 00:31:03.676 Critical Warnings: 00:31:03.676 Available Spare Space: OK 00:31:03.676 Temperature: OK 00:31:03.676 Device Reliability: OK 00:31:03.676 Read Only: No 00:31:03.676 Volatile Memory Backup: OK 00:31:03.676 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:03.676 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:03.676 Available Spare: 0% 00:31:03.676 Available Spare Threshold: 0% 00:31:03.676 Life Percentage Used:[2024-12-09 05:24:17.633434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.633444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025600) 00:31:03.676 [2024-12-09 05:24:17.633460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.676 [2024-12-09 05:24:17.633485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:03.676 [2024-12-09 05:24:17.633732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.676 [2024-12-09 05:24:17.633743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.676 [2024-12-09 05:24:17.633749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.633756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025600 00:31:03.676 [2024-12-09 05:24:17.633822] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:03.676 [2024-12-09 05:24:17.633842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:03.676 [2024-12-09 05:24:17.633854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.676 [2024-12-09 05:24:17.633864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025600 00:31:03.676 [2024-12-09 05:24:17.633873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.676 [2024-12-09 05:24:17.633880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025600 00:31:03.676 [2024-12-09 05:24:17.633888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.676 [2024-12-09 05:24:17.633896] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.676 [2024-12-09 05:24:17.633904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.676 [2024-12-09 05:24:17.633917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.633926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.633933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.676 [2024-12-09 05:24:17.633952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.676 [2024-12-09 05:24:17.633974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.676 [2024-12-09 05:24:17.634216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.676 [2024-12-09 05:24:17.634226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.676 [2024-12-09 05:24:17.634232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.634243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.676 [2024-12-09 05:24:17.634256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.634263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.634270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.676 [2024-12-09 05:24:17.634283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.676 [2024-12-09 05:24:17.634303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.676 [2024-12-09 05:24:17.634513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.676 [2024-12-09 05:24:17.634523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.676 [2024-12-09 05:24:17.634528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.634535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.676 [2024-12-09 05:24:17.634543] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:03.676 [2024-12-09 05:24:17.634552] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:03.676 [2024-12-09 05:24:17.634567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.634580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.634587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.676 [2024-12-09 05:24:17.634600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.676 [2024-12-09 05:24:17.634615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.676 [2024-12-09 
05:24:17.638834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.676 [2024-12-09 05:24:17.638860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.676 [2024-12-09 05:24:17.638867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.638874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.676 [2024-12-09 05:24:17.638899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.638906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.638913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:03.676 [2024-12-09 05:24:17.638927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.676 [2024-12-09 05:24:17.638954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:03.676 [2024-12-09 05:24:17.639199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:03.676 [2024-12-09 05:24:17.639214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:03.676 [2024-12-09 05:24:17.639220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:03.676 [2024-12-09 05:24:17.639226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:03.676 [2024-12-09 05:24:17.639240] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:31:03.939 0% 00:31:03.939 Data Units Read: 0 00:31:03.939 Data Units Written: 0 00:31:03.939 Host Read Commands: 0 00:31:03.939 Host Write Commands: 0 00:31:03.939 Controller Busy Time: 0 minutes 00:31:03.939 Power Cycles: 0 00:31:03.939 Power On Hours: 0 hours 00:31:03.939 Unsafe Shutdowns: 0 00:31:03.939 Unrecoverable Media Errors: 0 00:31:03.939 Lifetime Error Log Entries: 0 00:31:03.939 Warning Temperature Time: 0 minutes 00:31:03.939 Critical Temperature Time: 0 minutes 00:31:03.939 00:31:03.939 Number of Queues 00:31:03.939 ================ 00:31:03.939 Number of I/O Submission Queues: 127 00:31:03.939 Number of I/O Completion Queues: 127 00:31:03.939 00:31:03.939 Active Namespaces 00:31:03.939 ================= 00:31:03.939 Namespace ID:1 00:31:03.939 Error Recovery Timeout: Unlimited 00:31:03.939 Command Set Identifier: NVM (00h) 00:31:03.939 Deallocate: Supported 00:31:03.939 Deallocated/Unwritten Error: Not Supported 00:31:03.939 Deallocated Read Value: Unknown 00:31:03.939 Deallocate in Write Zeroes: Not Supported 00:31:03.939 Deallocated Guard Field: 0xFFFF 00:31:03.939 Flush: Supported 00:31:03.939 Reservation: Supported 00:31:03.939 Namespace Sharing Capabilities: Multiple Controllers 00:31:03.939 Size (in LBAs): 131072 (0GiB) 00:31:03.939 Capacity (in LBAs): 131072 (0GiB) 00:31:03.939 Utilization (in LBAs): 131072 (0GiB) 00:31:03.939 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:03.939 EUI64: ABCDEF0123456789 00:31:03.939 UUID: dc374080-fd92-4ca4-b253-335727c4dc35 00:31:03.939 Thin Provisioning: Not Supported 00:31:03.939 Per-NS Atomic Units: Yes 00:31:03.939 Atomic Boundary Size (Normal): 0 00:31:03.939 Atomic Boundary Size (PFail): 0 00:31:03.939 Atomic Boundary Offset: 0 00:31:03.939 Maximum Single Source Range Length: 65535 00:31:03.939 Maximum Copy 
Length: 65535 00:31:03.939 Maximum Source Range Count: 1 00:31:03.939 NGUID/EUI64 Never Reused: No 00:31:03.939 Namespace Write Protected: No 00:31:03.939 Number of LBA Formats: 1 00:31:03.940 Current LBA Format: LBA Format #00 00:31:03.940 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:03.940 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.940 rmmod nvme_tcp 00:31:03.940 rmmod nvme_fabrics 00:31:03.940 rmmod nvme_keyring 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1716306 ']' 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1716306 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1716306 ']' 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1716306 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1716306 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1716306' 00:31:03.940 killing process with pid 1716306 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1716306 00:31:03.940 05:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1716306 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.881 05:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.423 05:24:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:07.423 00:31:07.423 real 0m12.917s 00:31:07.423 user 0m11.574s 00:31:07.423 sys 0m6.528s 00:31:07.423 05:24:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.423 05:24:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:07.423 ************************************ 00:31:07.423 END TEST nvmf_identify 00:31:07.423 ************************************ 00:31:07.423 05:24:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:07.423 05:24:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:07.423 05:24:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:07.423 05:24:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.423 ************************************ 00:31:07.423 START TEST nvmf_perf 00:31:07.423 ************************************ 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:07.423 * Looking for test storage... 
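With nvmf_identify finished, its teardown is worth pausing on: the trace above deletes the test subsystem over RPC, kills the nvmf_tgt process, unloads the initiator-side kernel modules, strips the SPDK_NVMF iptables rules, and removes the test network namespace. A condensed, illustrative reconstruction of that sequence follows (the NQN, PID, namespace and interface names are the ones this run reported; this is a sketch, not the literal nvmftestfini implementation):

    #!/usr/bin/env bash
    # Sketch of the nvmf_identify teardown traced above (illustrative only).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NVMF_PID=1716306                                   # nvmf_tgt PID reported by killprocess
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$NVMF_PID" 2>/dev/null || true               # stop the target reactor
    modprobe -v -r nvme-tcp nvme-fabrics               # matches the rmmod output above
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK_NVMF accept rules
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true   # remove the target namespace
    ip -4 addr flush cvl_0_1                           # clear the initiator test address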
00:31:07.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.423 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:07.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.424 --rc genhtml_branch_coverage=1 00:31:07.424 --rc genhtml_function_coverage=1 00:31:07.424 --rc genhtml_legend=1 00:31:07.424 --rc geninfo_all_blocks=1 00:31:07.424 --rc geninfo_unexecuted_blocks=1 00:31:07.424 00:31:07.424 ' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:07.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.424 --rc genhtml_branch_coverage=1 00:31:07.424 --rc genhtml_function_coverage=1 00:31:07.424 --rc genhtml_legend=1 00:31:07.424 --rc geninfo_all_blocks=1 00:31:07.424 --rc geninfo_unexecuted_blocks=1 00:31:07.424 00:31:07.424 ' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:07.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.424 --rc genhtml_branch_coverage=1 00:31:07.424 --rc genhtml_function_coverage=1 00:31:07.424 --rc genhtml_legend=1 00:31:07.424 --rc geninfo_all_blocks=1 00:31:07.424 --rc geninfo_unexecuted_blocks=1 00:31:07.424 00:31:07.424 ' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:07.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.424 --rc genhtml_branch_coverage=1 00:31:07.424 --rc genhtml_function_coverage=1 00:31:07.424 --rc genhtml_legend=1 00:31:07.424 --rc geninfo_all_blocks=1 00:31:07.424 --rc geninfo_unexecuted_blocks=1 00:31:07.424 00:31:07.424 ' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:07.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.424 05:24:21 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:07.424 05:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.568 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:15.569 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:15.569 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:15.569 Found net devices under 0000:31:00.0: cvl_0_0 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.569 05:24:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:15.569 Found net devices under 0000:31:00.1: cvl_0_1 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.569 05:24:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:31:15.569 00:31:15.569 --- 10.0.0.2 ping statistics --- 00:31:15.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.569 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:15.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:31:15.569 00:31:15.569 --- 10.0.0.1 ping statistics --- 00:31:15.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.569 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1721023 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1721023 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1721023 ']' 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:15.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.569 05:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.569 [2024-12-09 05:24:28.987601] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:31:15.570 [2024-12-09 05:24:28.987734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.570 [2024-12-09 05:24:29.152758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.570 [2024-12-09 05:24:29.280539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.570 [2024-12-09 05:24:29.280601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.570 [2024-12-09 05:24:29.280614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.570 [2024-12-09 05:24:29.280628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.570 [2024-12-09 05:24:29.280638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.570 [2024-12-09 05:24:29.283664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.570 [2024-12-09 05:24:29.283801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.570 [2024-12-09 05:24:29.283908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.570 [2024-12-09 05:24:29.283931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.831 05:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:15.831 05:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:31:15.831 05:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.831 05:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.831 05:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.831 05:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.831 05:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:15.831 05:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:16.403 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:16.403 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:16.663 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:31:16.663 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:16.924 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
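With a Malloc0 RAM bdev and the local Nvme0n1 bdev registered, the trace that follows wires them into a TCP subsystem: one transport, one subsystem, two namespaces, a data listener and a discovery listener. Collapsed into a standalone sketch (the RPC calls and arguments are exactly the ones traced below; the rpc.py path is this job's checkout):

    #!/usr/bin/env bash
    # Condensed form of the perf.sh target setup traced below (illustrative sketch).
    set -e
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_create_transport -t tcp -o                        # TCP transport, default opts
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001  # -a: allow any host
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0                   # namespace 1: RAM-backed bdev
    "$RPC" nvmf_subsystem_add_ns "$NQN" Nvme0n1                   # namespace 2: local NVMe bdev
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two "Associating ... NSID 1/NSID 2" lines in each perf run below are these two namespaces showing up on the initiator side.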
00:31:16.924 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:31:16.924 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:31:16.924 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:31:16.924 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:31:16.924 [2024-12-09 05:24:30.913017] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:17.184 05:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:17.184 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:31:17.184 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:17.445 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:31:17.445 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:31:17.706 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:17.706 [2024-12-09 05:24:31.655761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:17.706 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:17.967 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:31:17.967 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:31:17.967 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:31:17.967 05:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:31:19.347 Initializing NVMe Controllers
00:31:19.347 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:31:19.347 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:31:19.347 Initialization complete. Launching workers.
00:31:19.347 ========================================================
00:31:19.347 Latency(us)
00:31:19.347 Device Information : IOPS MiB/s Average min max
00:31:19.347 PCIE (0000:65:00.0) NSID 1 from core 0: 72986.27 285.10 436.85 14.12 6069.89
00:31:19.347 ========================================================
00:31:19.347 Total : 72986.27 285.10 436.85 14.12 6069.89
00:31:19.347
00:31:19.606 05:24:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:20.989 Initializing NVMe Controllers
00:31:20.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:20.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:20.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:20.989 Initialization complete. Launching workers.
00:31:20.989 ========================================================
00:31:20.990 Latency(us)
00:31:20.990 Device Information : IOPS MiB/s Average min max
00:31:20.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 103.93 0.41 10017.01 241.82 45927.68
00:31:20.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.97 0.20 19548.69 7957.89 50880.39
00:31:20.990 ========================================================
00:31:20.990 Total : 155.90 0.61 13194.24 241.82 50880.39
00:31:20.990
00:31:20.990 05:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:22.381 Initializing NVMe Controllers
00:31:22.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:22.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:22.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:22.381 Initialization complete. Launching workers.
00:31:22.381 ========================================================
00:31:22.381 Latency(us)
00:31:22.381 Device Information : IOPS MiB/s Average min max
00:31:22.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10723.84 41.89 2984.71 450.51 6387.09
00:31:22.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3770.59 14.73 8531.22 4792.09 18705.09
00:31:22.381 ========================================================
00:31:22.381 Total : 14494.43 56.62 4427.59 450.51 18705.09
00:31:22.381
00:31:22.381 05:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:31:22.381 05:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:31:22.381 05:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:24.928 Initializing NVMe Controllers
00:31:24.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:24.928 Controller IO queue size 128, less than required.
00:31:24.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:24.928 Controller IO queue size 128, less than required.
00:31:24.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:24.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:24.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:24.928 Initialization complete. Launching workers.
00:31:24.928 ========================================================
00:31:24.928 Latency(us)
00:31:24.928 Device Information : IOPS MiB/s Average min max
00:31:24.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1633.25 408.31 80282.40 48759.05 167996.43
00:31:24.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.03 144.76 231135.52 79850.88 381491.08
00:31:24.928 ========================================================
00:31:24.928 Total : 2212.28 553.07 119765.62 48759.05 381491.08
00:31:24.928
00:31:24.928 05:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:31:25.188 No valid NVMe controllers or AIO or URING devices found
00:31:25.188 Initializing NVMe Controllers
00:31:25.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:25.188 Controller IO queue size 128, less than required.
00:31:25.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:25.188 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:31:25.188 Controller IO queue size 128, less than required.
00:31:25.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:25.188 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:31:25.188 WARNING: Some requested NVMe devices were skipped
00:31:25.188 05:24:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:31:27.734 Initializing NVMe Controllers
00:31:27.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:27.734 Controller IO queue size 128, less than required.
00:31:27.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:27.734 Controller IO queue size 128, less than required.
00:31:27.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:27.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:27.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:27.734 Initialization complete. Launching workers.
00:31:27.734 00:31:27.734 ==================== 00:31:27.734 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:27.734 TCP transport: 00:31:27.734 polls: 23601 00:31:27.734 idle_polls: 11004 00:31:27.734 sock_completions: 12597 00:31:27.734 nvme_completions: 8063 00:31:27.734 submitted_requests: 12090 00:31:27.734 queued_requests: 1 00:31:27.734 00:31:27.734 ==================== 00:31:27.734 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:27.734 TCP transport: 00:31:27.734 polls: 28643 00:31:27.734 idle_polls: 17350 00:31:27.734 sock_completions: 11293 00:31:27.734 nvme_completions: 6475 00:31:27.734 submitted_requests: 9668 00:31:27.734 queued_requests: 1 00:31:27.734 ======================================================== 00:31:27.734 Latency(us) 00:31:27.734 Device Information : IOPS MiB/s Average min max 00:31:27.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2015.47 503.87 65157.43 37663.17 156009.62 00:31:27.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1618.47 404.62 81425.91 46764.13 253810.79 00:31:27.734 ======================================================== 00:31:27.734 Total : 3633.94 908.49 72403.04 37663.17 253810.79 00:31:27.734 00:31:27.994 05:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:27.994 05:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.994 05:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:27.994 05:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:31:27.994 05:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=1a64eca2-0f59-4c65-bf9f-a4467e0b0a37 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 1a64eca2-0f59-4c65-bf9f-a4467e0b0a37 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=1a64eca2-0f59-4c65-bf9f-a4467e0b0a37 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:29.380 { 00:31:29.380 "uuid": "1a64eca2-0f59-4c65-bf9f-a4467e0b0a37", 00:31:29.380 "name": "lvs_0", 00:31:29.380 "base_bdev": "Nvme0n1", 00:31:29.380 "total_data_clusters": 457407, 00:31:29.380 "free_clusters": 457407, 00:31:29.380 "block_size": 512, 00:31:29.380 "cluster_size": 4194304 00:31:29.380 } 00:31:29.380 ]' 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="1a64eca2-0f59-4c65-bf9f-a4467e0b0a37") .free_clusters' 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=457407 00:31:29.380 05:24:43 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="1a64eca2-0f59-4c65-bf9f-a4467e0b0a37") .cluster_size' 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1829628 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1829628 00:31:29.380 1829628 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:29.380 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1a64eca2-0f59-4c65-bf9f-a4467e0b0a37 lbd_0 20480 00:31:29.640 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4a2d8ff6-25e7-4fb7-83c6-360f7a58d1fb 00:31:29.640 05:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4a2d8ff6-25e7-4fb7-83c6-360f7a58d1fb lvs_n_0 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8426cc79-39bb-4e0e-958f-6061381a60ec 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8426cc79-39bb-4e0e-958f-6061381a60ec 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=8426cc79-39bb-4e0e-958f-6061381a60ec 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:31.551 { 00:31:31.551 "uuid": "1a64eca2-0f59-4c65-bf9f-a4467e0b0a37", 00:31:31.551 "name": "lvs_0", 00:31:31.551 "base_bdev": "Nvme0n1", 00:31:31.551 "total_data_clusters": 457407, 00:31:31.551 "free_clusters": 452287, 00:31:31.551 "block_size": 512, 00:31:31.551 "cluster_size": 4194304 00:31:31.551 }, 00:31:31.551 { 00:31:31.551 "uuid": "8426cc79-39bb-4e0e-958f-6061381a60ec", 00:31:31.551 "name": "lvs_n_0", 00:31:31.551 "base_bdev": "4a2d8ff6-25e7-4fb7-83c6-360f7a58d1fb", 00:31:31.551 "total_data_clusters": 5114, 00:31:31.551 "free_clusters": 5114, 00:31:31.551 "block_size": 512, 00:31:31.551 "cluster_size": 4194304 00:31:31.551 } 00:31:31.551 ]' 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8426cc79-39bb-4e0e-958f-6061381a60ec") .free_clusters' 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8426cc79-39bb-4e0e-958f-6061381a60ec") .cluster_size' 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:31:31.551 20456 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:31.551 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8426cc79-39bb-4e0e-958f-6061381a60ec lbd_nest_0 20456 00:31:31.811 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=87674570-19bd-4c0d-931a-d0447b6fc6f9 00:31:31.811 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:31.811 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:31.811 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 87674570-19bd-4c0d-931a-d0447b6fc6f9 00:31:32.072 05:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.333 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:32.333 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:32.333 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:32.333 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:32.333 05:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:44.564 Initializing NVMe Controllers 00:31:44.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:44.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:44.564 Initialization complete. Launching workers. 00:31:44.564 ======================================================== 00:31:44.564 Latency(us) 00:31:44.564 Device Information : IOPS MiB/s Average min max 00:31:44.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.10 0.02 21728.02 323.09 49156.37 00:31:44.564 ======================================================== 00:31:44.564 Total : 46.10 0.02 21728.02 323.09 49156.37 00:31:44.564 00:31:44.564 05:24:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:44.564 05:24:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:54.560 Initializing NVMe Controllers 00:31:54.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:54.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:54.560 Initialization complete. Launching workers. 
00:31:54.560 ======================================================== 00:31:54.560 Latency(us) 00:31:54.560 Device Information : IOPS MiB/s Average min max 00:31:54.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 61.30 7.66 16321.60 5018.04 59869.72 00:31:54.560 ======================================================== 00:31:54.560 Total : 61.30 7.66 16321.60 5018.04 59869.72 00:31:54.560 00:31:54.560 05:25:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:54.560 05:25:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:54.560 05:25:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:04.560 Initializing NVMe Controllers 00:32:04.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:04.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:04.560 Initialization complete. Launching workers. 00:32:04.560 ======================================================== 00:32:04.560 Latency(us) 00:32:04.560 Device Information : IOPS MiB/s Average min max 00:32:04.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8445.95 4.12 3789.42 324.69 9616.16 00:32:04.560 ======================================================== 00:32:04.560 Total : 8445.95 4.12 3789.42 324.69 9616.16 00:32:04.560 00:32:04.560 05:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:04.560 05:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:14.565 Initializing NVMe Controllers 00:32:14.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:14.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:14.565 Initialization complete. Launching workers. 00:32:14.565 ======================================================== 00:32:14.565 Latency(us) 00:32:14.565 Device Information : IOPS MiB/s Average min max 00:32:14.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3472.88 434.11 9214.42 482.04 23169.28 00:32:14.565 ======================================================== 00:32:14.565 Total : 3472.88 434.11 9214.42 482.04 23169.28 00:32:14.565 00:32:14.565 05:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:14.565 05:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:14.565 05:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:24.560 Initializing NVMe Controllers 00:32:24.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.560 Controller IO queue size 128, less than required. 00:32:24.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:24.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:24.560 Initialization complete. Launching workers. 00:32:24.560 ======================================================== 00:32:24.560 Latency(us) 00:32:24.560 Device Information : IOPS MiB/s Average min max 00:32:24.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15845.41 7.74 8082.86 1423.07 19190.70 00:32:24.560 ======================================================== 00:32:24.560 Total : 15845.41 7.74 8082.86 1423.07 19190.70 00:32:24.560 00:32:24.560 05:25:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:24.560 05:25:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:36.790 Initializing NVMe Controllers 00:32:36.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:36.790 Controller IO queue size 128, less than required. 00:32:36.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:36.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:36.790 Initialization complete. Launching workers. 00:32:36.790 ======================================================== 00:32:36.790 Latency(us) 00:32:36.790 Device Information : IOPS MiB/s Average min max 00:32:36.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1178.80 147.35 108753.40 16032.95 222563.42 00:32:36.790 ======================================================== 00:32:36.790 Total : 1178.80 147.35 108753.40 16032.95 222563.42 00:32:36.790 00:32:36.790 05:25:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:36.790 05:25:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 87674570-19bd-4c0d-931a-d0447b6fc6f9 00:32:36.790 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:36.790 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4a2d8ff6-25e7-4fb7-83c6-360f7a58d1fb 00:32:37.049 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:37.049 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:37.049 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:37.049 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:37.049 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:37.049 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:37.049 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:37.049 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:37.049 05:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:37.049 rmmod nvme_tcp 
00:32:37.049 rmmod nvme_fabrics 00:32:37.049 rmmod nvme_keyring 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1721023 ']' 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1721023 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1721023 ']' 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1721023 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1721023 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1721023' 00:32:37.308 killing process with pid 1721023 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1721023 00:32:37.308 05:25:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1721023 00:32:39.847 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:39.847 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:39.847 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:39.847 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:39.848 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:39.848 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:39.848 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:39.848 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.848 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:39.848 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.848 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.848 05:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.762 05:25:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.762 00:32:41.762 real 1m34.563s 00:32:41.762 user 5m32.693s 00:32:41.762 sys 0m16.306s 00:32:41.762 05:25:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.762 05:25:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:41.762 ************************************ 00:32:41.762 END TEST nvmf_perf 00:32:41.762 ************************************ 00:32:41.762 05:25:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:41.762 05:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:41.762 05:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.762 05:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.762 ************************************ 00:32:41.762 START TEST nvmf_fio_host 00:32:41.762 ************************************ 00:32:41.762 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:42.022 * Looking for test storage... 00:32:42.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:42.022 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.023 --rc genhtml_branch_coverage=1 00:32:42.023 --rc genhtml_function_coverage=1 00:32:42.023 --rc genhtml_legend=1 00:32:42.023 --rc geninfo_all_blocks=1 00:32:42.023 --rc geninfo_unexecuted_blocks=1 00:32:42.023 00:32:42.023 ' 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.023 --rc genhtml_branch_coverage=1 00:32:42.023 --rc genhtml_function_coverage=1 00:32:42.023 --rc genhtml_legend=1 00:32:42.023 --rc geninfo_all_blocks=1 00:32:42.023 --rc geninfo_unexecuted_blocks=1 00:32:42.023 00:32:42.023 ' 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.023 --rc genhtml_branch_coverage=1 00:32:42.023 --rc genhtml_function_coverage=1 00:32:42.023 --rc genhtml_legend=1 00:32:42.023 --rc geninfo_all_blocks=1 00:32:42.023 --rc geninfo_unexecuted_blocks=1 00:32:42.023 00:32:42.023 ' 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.023 --rc genhtml_branch_coverage=1 00:32:42.023 --rc genhtml_function_coverage=1 00:32:42.023 --rc genhtml_legend=1 00:32:42.023 --rc geninfo_all_blocks=1 00:32:42.023 --rc geninfo_unexecuted_blocks=1 00:32:42.023 00:32:42.023 ' 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.023 05:25:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.023 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:42.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:42.024 
05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:42.024 05:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:50.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:50.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:50.163 Found net devices under 0000:31:00.0: cvl_0_0 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.163 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:50.164 Found net devices under 0000:31:00.1: cvl_0_1 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:32:50.164 00:32:50.164 --- 10.0.0.2 ping statistics --- 00:32:50.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.164 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:50.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:32:50.164 00:32:50.164 --- 10.0.0.1 ping statistics --- 00:32:50.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.164 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1741148 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1741148 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1741148 ']' 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.164 05:26:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.164 [2024-12-09 05:26:03.635393] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:32:50.164 [2024-12-09 05:26:03.635528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.164 [2024-12-09 05:26:03.798532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.164 [2024-12-09 05:26:03.925684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.164 [2024-12-09 05:26:03.925752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.164 [2024-12-09 05:26:03.925765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.164 [2024-12-09 05:26:03.925778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.164 [2024-12-09 05:26:03.925788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.164 [2024-12-09 05:26:03.928775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.164 [2024-12-09 05:26:03.928922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.164 [2024-12-09 05:26:03.928994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:50.164 [2024-12-09 05:26:03.928966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.738 05:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.738 05:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:50.738 05:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:50.738 [2024-12-09 05:26:04.590114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.738 05:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:50.738 05:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:50.738 05:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.738 05:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:51.000 Malloc1 00:32:51.000 05:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:51.278 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:51.539 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.539 [2024-12-09 05:26:05.512698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:51.800 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:51.801 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:51.801 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:51.801 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:51.801 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:51.801 05:26:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:52.400 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:52.400 fio-3.35 00:32:52.400 Starting 1 thread 00:32:54.989 00:32:54.989 test: (groupid=0, jobs=1): err= 0: pid=1741702: Mon Dec 9 05:26:08 2024 00:32:54.989 read: IOPS=12.3k, BW=48.1MiB/s (50.4MB/s)(96.4MiB/2005msec) 00:32:54.989 slat (usec): min=2, max=320, avg= 2.31, stdev= 2.81 00:32:54.989 clat (usec): min=3822, max=10289, avg=5716.81, stdev=427.16 00:32:54.989 lat (usec): min=3824, max=10292, avg=5719.13, stdev=427.36 00:32:54.989 clat percentiles (usec): 00:32:54.989 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:32:54.989 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5800], 00:32:54.990 | 70.00th=[ 5932], 80.00th=[ 
5997], 90.00th=[ 6194], 95.00th=[ 6325], 00:32:54.990 | 99.00th=[ 6652], 99.50th=[ 7177], 99.90th=[ 8979], 99.95th=[ 9372], 00:32:54.990 | 99.99th=[ 9896] 00:32:54.990 bw ( KiB/s): min=47920, max=49976, per=99.99%, avg=49248.00, stdev=925.63, samples=4 00:32:54.990 iops : min=11980, max=12494, avg=12312.00, stdev=231.41, samples=4 00:32:54.990 write: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(96.2MiB/2005msec); 0 zone resets 00:32:54.990 slat (usec): min=2, max=338, avg= 2.40, stdev= 2.35 00:32:54.990 clat (usec): min=3173, max=9302, avg=4632.46, stdev=360.43 00:32:54.990 lat (usec): min=3175, max=9304, avg=4634.85, stdev=360.73 00:32:54.990 clat percentiles (usec): 00:32:54.990 | 1.00th=[ 3851], 5.00th=[ 4113], 10.00th=[ 4228], 20.00th=[ 4359], 00:32:54.990 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:32:54.990 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5145], 00:32:54.990 | 99.00th=[ 5473], 99.50th=[ 6128], 99.90th=[ 7504], 99.95th=[ 7963], 00:32:54.990 | 99.99th=[ 9241] 00:32:54.990 bw ( KiB/s): min=48568, max=49808, per=99.99%, avg=49122.00, stdev=512.60, samples=4 00:32:54.990 iops : min=12142, max=12452, avg=12280.50, stdev=128.15, samples=4 00:32:54.990 lat (msec) : 4=1.27%, 10=98.72%, 20=0.01% 00:32:54.990 cpu : usr=75.15%, sys=23.50%, ctx=32, majf=0, minf=1538 00:32:54.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:54.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:54.990 issued rwts: total=24688,24624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:54.990 00:32:54.990 Run status group 0 (all jobs): 00:32:54.990 READ: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=96.4MiB (101MB), run=2005-2005msec 00:32:54.990 WRITE: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=96.2MiB (101MB), run=2005-2005msec 00:32:54.990 ----------------------------------------------------- 00:32:54.990 Suppressions used: 00:32:54.990 count bytes template 00:32:54.990 1 57 /usr/src/fio/parse.c 00:32:54.990 1 8 libtcmalloc_minimal.so 00:32:54.990 ----------------------------------------------------- 00:32:54.990 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 
00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:54.990 05:26:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:55.656 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:55.656 fio-3.35 00:32:55.656 Starting 1 thread 00:32:58.221 00:32:58.221 test: (groupid=0, jobs=1): err= 0: pid=1742511: Mon Dec 9 05:26:11 2024 00:32:58.221 read: IOPS=8943, BW=140MiB/s (147MB/s)(280MiB/2002msec) 00:32:58.221 slat (usec): min=3, max=119, avg= 3.88, stdev= 1.75 00:32:58.221 clat (usec): min=2112, max=15885, avg=8719.19, stdev=1984.57 00:32:58.221 lat (usec): min=2116, max=15889, avg=8723.07, stdev=1984.71 00:32:58.221 clat percentiles (usec): 00:32:58.221 | 1.00th=[ 4621], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 6915], 00:32:58.221 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:32:58.221 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11338], 95.00th=[12125], 00:32:58.221 | 99.00th=[13042], 99.50th=[13566], 99.90th=[15008], 99.95th=[15270], 00:32:58.221 | 99.99th=[15795] 00:32:58.221 bw ( KiB/s): min=65376, max=76608, per=49.14%, avg=70320.00, stdev=5002.14, samples=4 00:32:58.221 iops : min= 4086, max= 4788, avg=4395.00, stdev=312.63, samples=4 00:32:58.221 write: IOPS=5170, BW=80.8MiB/s (84.7MB/s)(144MiB/1778msec); 0 zone resets 00:32:58.221 slat (usec): min=40, max=390, avg=41.70, stdev= 7.63 00:32:58.221 clat (usec): min=2594, max=15262, avg=9740.39, stdev=1383.00 00:32:58.221 lat (usec): min=2635, max=15370, avg=9782.09, stdev=1384.63 00:32:58.221 clat percentiles (usec): 00:32:58.221 | 1.00th=[ 6915], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8586], 00:32:58.221 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10028], 00:32:58.221 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[12125], 00:32:58.221 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14877], 99.95th=[15139], 00:32:58.221 | 99.99th=[15270] 00:32:58.221 bw ( KiB/s): min=68032, max=79872, per=88.26%, avg=73016.00, stdev=5199.15, samples=4 00:32:58.221 iops : min= 4252, max= 4992, avg=4563.50, stdev=324.95, samples=4 00:32:58.221 lat (msec) : 4=0.31%, 10=67.76%, 20=31.93% 00:32:58.221 cpu : usr=86.61%, 
sys=12.04%, ctx=13, majf=0, minf=2364 00:32:58.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:32:58.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:58.221 issued rwts: total=17905,9193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.221 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:58.221 00:32:58.221 Run status group 0 (all jobs): 00:32:58.221 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=280MiB (293MB), run=2002-2002msec 00:32:58.221 WRITE: bw=80.8MiB/s (84.7MB/s), 80.8MiB/s-80.8MiB/s (84.7MB/s-84.7MB/s), io=144MiB (151MB), run=1778-1778msec 00:32:58.221 ----------------------------------------------------- 00:32:58.221 Suppressions used: 00:32:58.221 count bytes template 00:32:58.221 1 57 /usr/src/fio/parse.c 00:32:58.221 139 13344 /usr/src/fio/iolog.c 00:32:58.221 1 8 libtcmalloc_minimal.so 00:32:58.221 ----------------------------------------------------- 00:32:58.221 00:32:58.221 05:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.221 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:58.221 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:58.221 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:58.221 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:58.221 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:58.221 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:58.221 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:58.221 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:58.481 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:58.481 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:32:58.481 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:32:58.741 Nvme0n1 00:32:59.001 05:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=08e7842d-17d7-4a49-a875-a48cf8998a0e 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 08e7842d-17d7-4a49-a875-a48cf8998a0e 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=08e7842d-17d7-4a49-a875-a48cf8998a0e 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1371 -- # local cs 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:59.571 { 00:32:59.571 "uuid": "08e7842d-17d7-4a49-a875-a48cf8998a0e", 00:32:59.571 "name": "lvs_0", 00:32:59.571 "base_bdev": "Nvme0n1", 00:32:59.571 "total_data_clusters": 1787, 00:32:59.571 "free_clusters": 1787, 00:32:59.571 "block_size": 512, 00:32:59.571 "cluster_size": 1073741824 00:32:59.571 } 00:32:59.571 ]' 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="08e7842d-17d7-4a49-a875-a48cf8998a0e") .free_clusters' 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1787 00:32:59.571 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="08e7842d-17d7-4a49-a875-a48cf8998a0e") .cluster_size' 00:32:59.831 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:59.831 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1829888 00:32:59.831 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1829888 00:32:59.831 1829888 00:32:59.831 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:32:59.831 ad80137e-0e88-402f-9712-d7c14d2e0219 00:32:59.831 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:00.092 05:26:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- 
# shift 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:00.353 05:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:00.950 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:00.950 fio-3.35 00:33:00.950 Starting 1 thread 00:33:03.499 00:33:03.499 test: (groupid=0, jobs=1): err= 0: pid=1743705: Mon Dec 9 05:26:17 2024 00:33:03.499 read: IOPS=9227, BW=36.0MiB/s (37.8MB/s)(72.3MiB/2006msec) 00:33:03.499 slat (usec): min=2, max=123, avg= 2.35, stdev= 1.29 00:33:03.499 clat (usec): min=2676, max=13073, avg=7644.76, stdev=583.12 00:33:03.499 lat (usec): min=2698, max=13075, avg=7647.11, stdev=583.03 00:33:03.499 clat percentiles (usec): 00:33:03.499 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:33:03.499 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:33:03.499 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:33:03.499 | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[10683], 99.95th=[11863], 00:33:03.499 | 99.99th=[12649] 00:33:03.499 bw ( KiB/s): min=35696, max=37472, per=99.93%, avg=36882.00, stdev=804.95, samples=4 00:33:03.499 iops : min= 8924, max= 9368, avg=9220.50, stdev=201.24, samples=4 00:33:03.499 write: IOPS=9233, BW=36.1MiB/s (37.8MB/s)(72.4MiB/2006msec); 0 zone resets 00:33:03.499 slat (nsec): min=2237, max=103033, avg=2438.00, stdev=831.02 00:33:03.499 clat (usec): min=1367, max=11456, avg=6117.06, stdev=496.95 00:33:03.499 lat (usec): min=1376, max=11458, avg=6119.50, stdev=496.91 00:33:03.499 clat percentiles (usec): 00:33:03.499 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:33:03.499 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:33:03.500 | 70.00th=[ 6325], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:33:03.500 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[10552], 00:33:03.500 | 99.99th=[11338] 00:33:03.500 bw ( KiB/s): min=36480, max=37312, per=99.99%, avg=36928.00, stdev=365.79, samples=4 00:33:03.500 iops : min= 9120, max= 9328, avg=9232.00, stdev=91.45, samples=4 00:33:03.500 lat (msec) : 2=0.02%, 4=0.09%, 10=99.78%, 20=0.10% 00:33:03.500 cpu : 
usr=76.61%, sys=22.39%, ctx=47, majf=0, minf=1535 00:33:03.500 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:03.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.500 issued rwts: total=18510,18522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.500 00:33:03.500 Run status group 0 (all jobs): 00:33:03.500 READ: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.3MiB (75.8MB), run=2006-2006msec 00:33:03.500 WRITE: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=72.4MiB (75.9MB), run=2006-2006msec 00:33:03.500 ----------------------------------------------------- 00:33:03.500 Suppressions used: 00:33:03.500 count bytes template 00:33:03.500 1 58 /usr/src/fio/parse.c 00:33:03.500 1 8 libtcmalloc_minimal.so 00:33:03.500 ----------------------------------------------------- 00:33:03.500 00:33:03.500 05:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:03.761 05:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=9f180843-65fe-4662-9371-fed0ca65da56 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 9f180843-65fe-4662-9371-fed0ca65da56 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=9f180843-65fe-4662-9371-fed0ca65da56 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:04.699 { 00:33:04.699 "uuid": "08e7842d-17d7-4a49-a875-a48cf8998a0e", 00:33:04.699 "name": "lvs_0", 00:33:04.699 "base_bdev": "Nvme0n1", 00:33:04.699 "total_data_clusters": 1787, 00:33:04.699 "free_clusters": 0, 00:33:04.699 "block_size": 512, 00:33:04.699 "cluster_size": 1073741824 00:33:04.699 }, 00:33:04.699 { 00:33:04.699 "uuid": "9f180843-65fe-4662-9371-fed0ca65da56", 00:33:04.699 "name": "lvs_n_0", 00:33:04.699 "base_bdev": "ad80137e-0e88-402f-9712-d7c14d2e0219", 00:33:04.699 "total_data_clusters": 457025, 00:33:04.699 "free_clusters": 457025, 00:33:04.699 "block_size": 512, 00:33:04.699 "cluster_size": 4194304 00:33:04.699 } 00:33:04.699 ]' 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9f180843-65fe-4662-9371-fed0ca65da56") .free_clusters' 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=457025 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="9f180843-65fe-4662-9371-fed0ca65da56") .cluster_size' 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1828100 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1828100 00:33:04.699 1828100 00:33:04.699 05:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:33:06.617 f91fbb13-19d2-4543-be86-2f8a9fd9714c 00:33:06.617 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:06.617 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:06.875 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:07.135 05:26:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:07.135 05:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:07.395 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:07.395 fio-3.35 00:33:07.395 Starting 1 thread 00:33:10.692 00:33:10.692 test: (groupid=0, jobs=1): err= 0: pid=1745222: Mon Dec 9 05:26:23 2024 00:33:10.692 read: IOPS=7916, BW=30.9MiB/s (32.4MB/s)(62.1MiB/2007msec) 00:33:10.692 slat (usec): min=2, max=120, avg= 2.35, stdev= 1.28 00:33:10.692 clat (usec): min=3130, max=14807, avg=8892.80, stdev=720.47 00:33:10.692 lat (usec): min=3153, max=14809, avg=8895.15, stdev=720.38 00:33:10.692 clat percentiles (usec): 00:33:10.692 | 1.00th=[ 7177], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8356], 00:33:10.692 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:33:10.692 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:33:10.692 | 99.00th=[10421], 99.50th=[10683], 99.90th=[12911], 99.95th=[13698], 00:33:10.692 | 99.99th=[13829] 00:33:10.692 bw ( KiB/s): min=30224, max=32208, per=99.91%, avg=31638.00, stdev=945.93, samples=4 00:33:10.692 iops : min= 7556, max= 8052, avg=7909.50, stdev=236.48, samples=4 00:33:10.692 write: IOPS=7889, BW=30.8MiB/s (32.3MB/s)(61.9MiB/2007msec); 0 zone resets 00:33:10.692 slat (nsec): min=2226, max=105556, avg=2438.94, stdev=906.94 00:33:10.692 clat (usec): min=1520, max=13342, avg=7181.06, stdev=609.53 00:33:10.692 lat (usec): min=1530, max=13344, avg=7183.50, stdev=609.49 00:33:10.692 clat percentiles (usec): 00:33:10.692 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 00:33:10.692 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:33:10.692 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8094], 00:33:10.692 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[11600], 99.95th=[11863], 00:33:10.692 | 99.99th=[13304] 00:33:10.693 bw ( KiB/s): min=31304, max=31744, per=99.99%, avg=31554.00, stdev=184.80, samples=4 00:33:10.693 iops : min= 7826, max= 7936, avg=7888.50, stdev=46.20, samples=4 00:33:10.693 lat (msec) : 2=0.01%, 4=0.09%, 10=97.37%, 20=2.54% 00:33:10.693 cpu : usr=71.73%, sys=27.32%, ctx=40, majf=0, minf=1534 00:33:10.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:10.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:10.693 issued rwts: total=15889,15834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:10.693 00:33:10.693 Run status group 0 (all jobs): 00:33:10.693 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=62.1MiB (65.1MB), run=2007-2007msec 00:33:10.693 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.9MiB (64.9MB), run=2007-2007msec 00:33:10.693 ----------------------------------------------------- 00:33:10.693 Suppressions used: 00:33:10.693 count bytes template 00:33:10.693 1 58 /usr/src/fio/parse.c 00:33:10.693 1 8 libtcmalloc_minimal.so 00:33:10.693 
----------------------------------------------------- 00:33:10.693 00:33:10.693 05:26:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:10.693 05:26:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:10.693 05:26:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:13.994 05:26:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:13.994 05:26:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:14.253 05:26:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:14.253 05:26:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:16.788 rmmod nvme_tcp 00:33:16.788 rmmod nvme_fabrics 00:33:16.788 rmmod nvme_keyring 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1741148 ']' 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1741148 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1741148 ']' 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1741148 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1741148 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1741148' 00:33:16.788 killing process with pid 1741148 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1741148 00:33:16.788 05:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1741148 00:33:17.049 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:17.049 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:17.049 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:17.049 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:17.050 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:17.050 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:17.050 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:17.050 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:17.050 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:17.050 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.050 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.050 05:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:19.593 00:33:19.593 real 0m37.433s 00:33:19.593 user 2m53.865s 00:33:19.593 sys 0m12.917s 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.593 ************************************ 00:33:19.593 END TEST nvmf_fio_host 00:33:19.593 ************************************ 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.593 ************************************ 00:33:19.593 START TEST nvmf_failover 00:33:19.593 ************************************ 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:19.593 * Looking for test storage... 
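Stepping back to the lvstore sizing in the fio_host run that just ended: get_lvs_free_mb reads free_clusters and cluster_size out of bdev_lvol_get_lvstores and converts to MiB. A condensed sketch, with rpc.py standing in for the full scripts/rpc.py path and the run's own numbers worked into the comments:

    # Free space of one lvstore in MiB: clusters times bytes-per-cluster,
    # divided by 1 MiB.
    get_lvs_free_mb() {
        local lvs_uuid=$1 fc cs
        fc=$(rpc.py bdev_lvol_get_lvstores |
             jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters")
        cs=$(rpc.py bdev_lvol_get_lvstores |
             jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size")
        # lvs_0:     1787 * 1073741824 / 1048576 = 1829888  (1 GiB clusters)
        # lvs_n_0: 457025 *    4194304 / 1048576 = 1828100  (4 MiB clusters)
        echo $(( fc * cs / (1024 * 1024) ))
    }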
00:33:19.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:19.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.593 --rc genhtml_branch_coverage=1 00:33:19.593 --rc genhtml_function_coverage=1 00:33:19.593 --rc genhtml_legend=1 00:33:19.593 --rc geninfo_all_blocks=1 00:33:19.593 --rc geninfo_unexecuted_blocks=1 00:33:19.593 00:33:19.593 ' 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:19.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.593 --rc genhtml_branch_coverage=1 00:33:19.593 --rc genhtml_function_coverage=1 00:33:19.593 --rc genhtml_legend=1 00:33:19.593 --rc geninfo_all_blocks=1 00:33:19.593 --rc geninfo_unexecuted_blocks=1 00:33:19.593 00:33:19.593 ' 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:19.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.593 --rc genhtml_branch_coverage=1 00:33:19.593 --rc genhtml_function_coverage=1 00:33:19.593 --rc genhtml_legend=1 00:33:19.593 --rc geninfo_all_blocks=1 00:33:19.593 --rc geninfo_unexecuted_blocks=1 00:33:19.593 00:33:19.593 ' 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:19.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.593 --rc genhtml_branch_coverage=1 00:33:19.593 --rc genhtml_function_coverage=1 00:33:19.593 --rc genhtml_legend=1 00:33:19.593 --rc geninfo_all_blocks=1 00:33:19.593 --rc geninfo_unexecuted_blocks=1 00:33:19.593 00:33:19.593 ' 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:19.593 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:19.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
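The lcov gate traced a few lines back ("lt 1.15 2" via cmp_versions) is a plain component-wise comparison. A condensed sketch of that walk; the helper name here is illustrative, the real ones are lt and cmp_versions in scripts/common.sh:

    # Split both versions on ".", "-" or ":", then compare field by
    # field; a missing field counts as 0. Returns success when $1 < $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # true: 1 < 2 in the first field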
00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:19.594 05:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:27.729 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:27.730 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:27.730 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:27.730 Found net devices under 0000:31:00.0: cvl_0_0 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:27.730 Found net devices under 0000:31:00.1: cvl_0_1 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:27.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:33:27.730 00:33:27.730 --- 10.0.0.2 ping statistics --- 00:33:27.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.730 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:27.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:27.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:33:27.730 00:33:27.730 --- 10.0.0.1 ping statistics --- 00:33:27.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.730 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:27.730 05:26:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:27.730 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1751077 00:33:27.730 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1751077 00:33:27.730 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:27.730 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1751077 ']' 00:33:27.730 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.730 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.730 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.731 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.731 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:27.731 [2024-12-09 05:26:41.104087] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:33:27.731 [2024-12-09 05:26:41.104208] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.731 [2024-12-09 05:26:41.271968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:27.731 [2024-12-09 05:26:41.398659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
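The namespace plumbing and target launch traced above come down to the following sequence (addresses as used in this run; the nvmf_tgt path is abbreviated):

    # Target and initiator share one box, so the target NIC moves into its
    # own network namespace; the initiator NIC stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # the reachability check whose output appears above
    # nvmfappstart then launches the target inside that namespace and
    # polls /var/tmp/spdk.sock until the RPC server answers:
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE &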
00:33:27.731 [2024-12-09 05:26:41.398729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.731 [2024-12-09 05:26:41.398743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.731 [2024-12-09 05:26:41.398761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.731 [2024-12-09 05:26:41.398771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:27.731 [2024-12-09 05:26:41.401517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:27.731 [2024-12-09 05:26:41.401625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.731 [2024-12-09 05:26:41.401651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:27.992 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.992 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:27.992 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:27.992 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:27.992 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:27.992 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:27.992 05:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:28.252 [2024-12-09 05:26:42.107914] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.252 05:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:28.512 Malloc0 00:33:28.512 05:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:28.773 05:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:29.034 05:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.034 [2024-12-09 05:26:42.976999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.034 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:29.296 [2024-12-09 05:26:43.177509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:29.296 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:29.556 [2024-12-09 05:26:43.374119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
00:33:29.556 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1751618
00:33:29.556 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:33:29.556 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:29.556 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1751618 /var/tmp/bdevperf.sock
00:33:29.556 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1751618 ']'
00:33:29.556 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:29.556 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:29.556 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:29.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:29.557 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:29.557 05:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:30.500 05:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:30.500 05:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:33:30.500 05:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:30.761 NVMe0n1
00:33:30.761 05:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:31.023
00:33:31.023 05:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1751945
00:33:31.023 05:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:31.023 05:26:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:33:32.412 05:26:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:32.412 [2024-12-09 05:26:46.152419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set
00:33:32.413 [... the same tcp.c:1790 record repeats for tqpair=0x618000003080 with advancing timestamps through 05:26:46.153081, roughly a hundred times in all; duplicate records elided ...]
00:33:32.414 05:26:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:33:35.707 05:26:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:35.707
00:33:35.708 05:26:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:33:35.968 [2024-12-09 05:26:49.737556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set
00:33:35.968 [... the same record repeats for tqpair=0x618000003880 six more times through 05:26:49.737637; duplicates elided ...]
00:33:35.968 05:26:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:33:39.264 05:26:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:39.264 [2024-12-09 05:26:52.931810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:39.264 05:26:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:33:40.208 05:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:33:40.208 [2024-12-09 05:26:54.124309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set
00:33:40.208 [... the same record repeats for tqpair=0x618000004480 through 05:26:54.124472, roughly twenty records in all; duplicates elided ...]
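Stripped of the xtrace noise, the exercise above is bdevperf running verify I/O for 15 seconds while the script pulls listeners out from under it one at a time. A condensed sketch of the same host/failover.sh steps (commands exactly as logged; $RPC and $NQN as in the setup sketch earlier, and the bdevperf RPC socket from the log):

    # Attach two paths to the same subsystem in failover mode
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop path 1 -> fail over to 4421
    sleep 3
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # drop path 2 -> fail over to 4422
    sleep 3
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # restore path 1
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # drop path 3 -> fail back to 4420

Each removal is what produces a burst of tcp.c:1790 state-change records like those above, as the target tears down the qpair on the dropped listener.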
00:33:40.208 05:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1751945
00:33:46.796 {
00:33:46.796   "results": [
00:33:46.796     {
00:33:46.796       "job": "NVMe0n1",
00:33:46.796       "core_mask": "0x1",
00:33:46.796       "workload": "verify",
00:33:46.796       "status": "finished",
00:33:46.796       "verify_range": {
00:33:46.796         "start": 0,
00:33:46.796         "length": 16384
00:33:46.796       },
00:33:46.796       "queue_depth": 128,
00:33:46.796       "io_size": 4096,
00:33:46.796       "runtime": 15.003604,
00:33:46.796       "iops": 11154.120036759168,
00:33:46.796       "mibps": 43.5707813935905,
00:33:46.796       "io_failed": 9029,
00:33:46.797       "io_timeout": 0,
00:33:46.797       "avg_latency_us": 10865.363263390047,
00:33:46.797       "min_latency_us": 607.5733333333334,
00:33:46.797       "max_latency_us": 19442.346666666668
00:33:46.797     }
00:33:46.797   ],
00:33:46.797   "core_count": 1
00:33:46.797 }
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1751618
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1751618 ']'
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1751618
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1751618
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1751618'
00:33:46.797 killing process with pid 1751618
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1751618
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1751618
00:33:46.797 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:46.797 [2024-12-09 05:26:43.485850] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:33:46.797 [2024-12-09 05:26:43.485965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751618 ]
00:33:46.797 [2024-12-09 05:26:43.628242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:46.797 [2024-12-09 05:26:43.725722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:46.797 Running I/O for 15 seconds...
00:33:46.797 9796.00 IOPS, 38.27 MiB/s [2024-12-09T04:27:00.794Z]
00:33:46.797 [2024-12-09 05:26:46.155324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.797 [2024-12-09 05:26:46.155371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.797 [... between 05:26:46.155398 and 05:26:46.157975 the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every in-flight command on the deleted qpair (READ and WRITE, lba 84104 through 84976), each completing with ABORTED - SQ DELETION (00/08); roughly a hundred duplicate record pairs elided, and the run continues beyond this excerpt ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.157986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.158011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.158034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.158057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.158085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.158108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.158131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.158154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.158177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.800 [2024-12-09 05:26:46.158200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.800 [2024-12-09 05:26:46.158213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
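Every completion in the burst above carries the same status word. As a stand-alone reference, here is a minimal C sketch of how the "(00/08)" pair and the p/m/dnr flags fall out of the 16-bit NVMe status field, assuming the NVMe base spec layout with the phase tag kept in bit 0 the way SPDK stores it (this snippet is an illustration, not part of the test code):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode an NVMe completion status word into the fields the SPDK log
     * prints as "(sct/sc) ... p m dnr". Layout per the NVMe base spec. */
    static void decode_status(uint16_t status)
    {
        unsigned p   = status & 0x1;          /* phase tag */
        unsigned sc  = (status >> 1) & 0xff;  /* status code */
        unsigned sct = (status >> 9) & 0x7;   /* status code type */
        unsigned m   = (status >> 14) & 0x1;  /* more */
        unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        if (sct == 0x0 && sc == 0x08) {
            printf("generic status 0x08: ABORTED - SQ DELETION\n");
        }
    }

    int main(void)
    {
        /* sct=0 (generic), sc=0x08, p/m/dnr clear: the status seen above. */
        decode_status(0x08 << 1);
        return 0;
    }

Note dnr:0 on every completion: the aborts are retryable, which is what allows the failover below to proceed without surfacing I/O errors.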
00:33:46.800 [2024-12-09 05:26:46.158614] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:33:46.800 [2024-12-09 05:26:46.158652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:46.800 [2024-12-09 05:26:46.158665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.800 [2024-12-09 05:26:46.158679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:46.800 [2024-12-09 05:26:46.158689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.800 [2024-12-09 05:26:46.158700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:46.800 [2024-12-09 05:26:46.158711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.800 [2024-12-09 05:26:46.158722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:46.800 [2024-12-09 05:26:46.158732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.800 [2024-12-09 05:26:46.158742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:33:46.800 [2024-12-09 05:26:46.158802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393300 (9): Bad file descriptor
00:33:46.800 [2024-12-09 05:26:46.162535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:33:46.800 [2024-12-09 05:26:46.200264] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:33:46.800 9811.50 IOPS, 38.33 MiB/s [2024-12-09T04:27:00.797Z]
9969.33 IOPS, 38.94 MiB/s [2024-12-09T04:27:00.797Z]
10319.75 IOPS, 40.31 MiB/s [2024-12-09T04:27:00.797Z]
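The per-second throughput ticks (presumably printed by the bdevperf workload driving this test) are self-consistent with the 4 KiB I/O size visible in every command above (len:8 blocks x 512 B = 0x1000 bytes): 9811.50 IOPS x 4096 B = 40,187,904 B/s, i.e. 38.33 MiB/s, and likewise 9969.33 IOPS works out to 38.94 MiB/s and 10319.75 IOPS to 40.31 MiB/s. The rate climbing through the ticks suggests I/O kept flowing across the reset.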
[2024-12-09 05:26:49.738552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.800 [2024-12-09 05:26:49.738592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 40 more WRITE/completion pairs, lba:7664 through lba:7976 in steps of 8, each ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:33:46.801 [2024-12-09 05:26:49.739313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.801 [2024-12-09 05:26:49.739321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.801 [2024-12-09 05:26:49.739330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.801 [2024-12-09 05:26:49.739337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 5 more READ/completion pairs, lba:7480 through lba:7512 in steps of 8, each ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:33:46.802 [2024-12-09 05:26:49.739428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.802 [2024-12-09 05:26:49.739434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
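The aborted WRITEs and READs above all complete with dnr:0, so a consumer of the NVMe driver can retry them once a qpair on the surviving path exists; the in-tree bdev_nvme module handles this internally as part of the failover recorded in this log. A minimal consumer-side sketch against the public spdk/nvme.h API is below; io_ctx and its fields are assumptions of the sketch, and this is an illustration of the retry pattern, not SPDK's actual bdev_nvme retry path:

    #include <stdint.h>
    #include "spdk/nvme.h"

    /* Resubmit a write whose completion says it was aborted because its
     * submission queue was deleted -- the (00/08) status filling this log. */
    struct io_ctx {
        struct spdk_nvme_ns    *ns;
        struct spdk_nvme_qpair *qpair;  /* qpair on the failed-over path */
        void                   *buf;
        uint64_t                lba;
        uint32_t                lba_count;
    };

    static void write_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        struct io_ctx *io = arg;

        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
            !cpl->status.dnr) {
            /* Retryable abort: resubmit on the reconnected qpair rather
             * than surfacing an error (error handling of the resubmit
             * itself elided for brevity). */
            spdk_nvme_ns_cmd_write(io->ns, io->qpair, io->buf,
                                   io->lba, io->lba_count,
                                   write_done, io, 0);
            return;
        }
        /* success, or a non-retryable error: complete the I/O upward */
    }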
00:33:46.802 [2024-12-09 05:26:49.739443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.802 [2024-12-09 05:26:49.739450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 46 more WRITE/completion pairs, lba:8000 through lba:8360 in steps of 8, each ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:33:46.803 [2024-12-09 05:26:49.740201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.803 [2024-12-09 05:26:49.740208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
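What triggers the manual completions below is the poll-side contract visible earlier in this log: the completion poller returned an error (the "Failed to flush tqpair ... (9): Bad file descriptor" line above), after which the driver tears the qpair down, manually fails out everything still queued as ABORTED - SQ DELETION, and resets the controller. A schematic of that contract against the public API, as a hedged sketch only (poll_io() is an assumption of this sketch, not an SPDK function, and in the test the reset is driven by bdev_nvme rather than user code):

    #include "spdk/nvme.h"

    /* Poll a qpair; a negative return means the transport behind it is
     * gone, so stop using it and reset the controller.  Callers
     * reallocate their I/O qpairs after the reset completes. */
    static void poll_io(struct spdk_nvme_ctrlr *ctrlr,
                        struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no cap */);

        if (rc < 0) {
            spdk_nvme_ctrlr_reset(ctrlr);
        }
    }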
00:33:46.803 [2024-12-09 05:26:49.740230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:46.803 [2024-12-09 05:26:49.740241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8376 len:8 PRP1 0x0 PRP2 0x0
00:33:46.803 [2024-12-09 05:26:49.740249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.803 [2024-12-09 05:26:49.740261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:46.803 [2024-12-09 05:26:49.740269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:46.803 [2024-12-09 05:26:49.740277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:8 PRP1 0x0 PRP2 0x0
00:33:46.803 [2024-12-09 05:26:49.740285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 12 more manually completed WRITEs, lba:8392 through lba:8480 in steps of 8, each preceded by "aborting queued i/o" and completed ABORTED - SQ DELETION (00/08) ...]
00:33:46.803 [2024-12-09 05:26:49.740610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:46.803 [2024-12-09 05:26:49.740615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:46.803 [2024-12-09 05:26:49.740621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8488 len:8 PRP1 0x0 PRP2 0x0
00:33:46.803 [2024-12-09 05:26:49.740628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.803 [2024-12-09 05:26:49.740635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:46.803 [2024-12-09 05:26:49.740641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:46.803 [2024-12-09 05:26:49.740646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7528 len:8 PRP1 0x0 PRP2 0x0
00:33:46.804 [2024-12-09 05:26:49.740654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 14 more manually completed READs, lba:7536 through lba:7640 in steps of 8, each preceded by "aborting queued i/o" and completed ABORTED - SQ DELETION (00/08) ...]
00:33:46.804 [2024-12-09 05:26:49.741029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:46.804 [2024-12-09 05:26:49.741035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:46.804 [2024-12-09 05:26:49.741040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:8 PRP1 0x0 PRP2 0x0
00:33:46.804 [2024-12-09 05:26:49.741048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:46.804 [2024-12-09 05:26:49.741198] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*:
[nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:46.804 [2024-12-09 05:26:49.741224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.804 [2024-12-09 05:26:49.741233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.804 [2024-12-09 05:26:49.741242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.804 [2024-12-09 05:26:49.741249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.804 [2024-12-09 05:26:49.741257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.804 [2024-12-09 05:26:49.741264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.804 [2024-12-09 05:26:49.741274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.804 [2024-12-09 05:26:49.741281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.804 [2024-12-09 05:26:49.751188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:46.804 [2024-12-09 05:26:49.751267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393300 (9): Bad file descriptor 00:33:46.804 [2024-12-09 05:26:49.754701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:46.804 [2024-12-09 05:26:49.784501] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
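The burst of notices above is the expected shape of an SPDK bdev_nvme failover: the submission queues on the failing path are deleted, every queued command is manually completed with ABORTED - SQ DELETION, and bdev_nvme_failover_trid switches the controller to the next registered path before resetting it. When triaging a run like this, only the state-transition lines carry signal; a minimal Bash sketch for pulling them out of a saved console log (the file name console.log is assumed here, not part of the harness):

    # Keep only the bdev_nvme/nvme_ctrlr state transitions; the per-command
    # ABORTED - SQ DELETION notices merely echo the queue teardown.
    grep -E 'Start failover|in failed state|resetting controller|Resetting controller successful' console.log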
00:33:46.804 10428.60 IOPS, 40.74 MiB/s [2024-12-09T04:27:00.801Z] 10600.67 IOPS, 41.41 MiB/s [2024-12-09T04:27:00.801Z] 10742.14 IOPS, 41.96 MiB/s [2024-12-09T04:27:00.801Z] 10833.62 IOPS, 42.32 MiB/s [2024-12-09T04:27:00.801Z] 10920.78 IOPS, 42.66 MiB/s [2024-12-09T04:27:00.801Z]
00:33:46.804 [2024-12-09 05:26:54.125088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.804 [... the abort pattern repeats for the next path teardown on sqid:1: in-flight READ lba:26392-26576 and WRITE lba:26600-27184 (SGL, len:8) each completed ABORTED - SQ DELETION (00/08), then queued WRITE lba:27192-27408 and READ lba:26584-26592 (PRP1 0x0 PRP2 0x0) manually completed with the same status ...]
00:33:46.808 [2024-12-09 05:26:54.138268] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:33:46.808 [2024-12-09 05:26:54.138303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:46.808 [... the outstanding ASYNC EVENT REQUESTs (cid:0 through cid:3) on the admin queue are again each completed ABORTED - SQ DELETION (00/08) ...]
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.808 [2024-12-09 05:26:54.138364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.808 [2024-12-09 05:26:54.138375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:46.808 [2024-12-09 05:26:54.138422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393300 (9): Bad file descriptor 00:33:46.808 [2024-12-09 05:26:54.141703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:46.808 [2024-12-09 05:26:54.256672] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:33:46.808 10850.30 IOPS, 42.38 MiB/s [2024-12-09T04:27:00.805Z] 10936.27 IOPS, 42.72 MiB/s [2024-12-09T04:27:00.805Z] 11003.92 IOPS, 42.98 MiB/s [2024-12-09T04:27:00.805Z] 11060.08 IOPS, 43.20 MiB/s [2024-12-09T04:27:00.805Z] 11112.14 IOPS, 43.41 MiB/s 00:33:46.808 Latency(us) 00:33:46.808 [2024-12-09T04:27:00.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.808 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:46.808 Verification LBA range: start 0x0 length 0x4000 00:33:46.808 NVMe0n1 : 15.00 11154.12 43.57 601.79 0.00 10865.36 607.57 19442.35 00:33:46.808 [2024-12-09T04:27:00.805Z] =================================================================================================================== 00:33:46.808 [2024-12-09T04:27:00.805Z] Total : 11154.12 43.57 601.79 0.00 10865.36 607.57 19442.35 00:33:46.808 Received shutdown signal, test time was about 15.000000 seconds 00:33:46.808 00:33:46.808 Latency(us) 00:33:46.808 [2024-12-09T04:27:00.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.808 [2024-12-09T04:27:00.805Z] =================================================================================================================== 00:33:46.808 [2024-12-09T04:27:00.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.808 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:46.808 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:46.808 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:46.808 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1754814 00:33:46.808 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1754814 /var/tmp/bdevperf.sock 00:33:46.809 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:46.809 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1754814 ']' 00:33:46.809 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:46.809 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.809 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:33:46.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:46.809 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.809 05:27:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:47.750 05:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.750 05:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:47.750 05:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:47.750 [2024-12-09 05:27:01.673016] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:47.750 05:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:48.011 [2024-12-09 05:27:01.845431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:48.011 05:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:48.272 NVMe0n1 00:33:48.272 05:27:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:48.843 00:33:48.843 05:27:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:49.103 00:33:49.103 05:27:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:49.103 05:27:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:49.103 05:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:49.363 05:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:52.662 05:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:52.662 05:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:52.662 05:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1756087 00:33:52.662 05:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:52.662 05:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1756087 
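The xtrace above reduces to a short RPC sequence: start bdevperf with -f (fail on I/O error disabled paths), publish two extra listeners on the target, attach the same controller over all three ports with -x failover so the secondary trids are recorded, then detach the active path and let perform_tests run while the driver resets onto the next trid. A minimal sketch, assuming the same paths, ports, and NQN shown in this run (every command below appears in the trace; only the shell variables are added here):

  #!/usr/bin/env bash
  # Condensed from the failover.sh steps traced above; not the script itself.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # bdevperf in RPC-wait mode (-z), 128-deep 4 KiB verify workload, -f to tolerate path failure.
  "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &

  # Target side: add the two alternate listeners.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

  # bdevperf side: one bdev name, three paths; -x failover keeps 4421/4422 as alternates.
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -x failover
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x failover
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN" -x failover

  # Drop the active path; I/O should resume on the next trid after a controller reset.
  "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  sleep 3
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests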
00:33:53.601 { 00:33:53.601 "results": [ 00:33:53.601 { 00:33:53.601 "job": "NVMe0n1", 00:33:53.601 "core_mask": "0x1", 00:33:53.601 "workload": "verify", 00:33:53.601 "status": "finished", 00:33:53.601 "verify_range": { 00:33:53.601 "start": 0, 00:33:53.601 "length": 16384 00:33:53.601 }, 00:33:53.601 "queue_depth": 128, 00:33:53.601 "io_size": 4096, 00:33:53.601 "runtime": 1.007101, 00:33:53.601 "iops": 11518.209196495683, 00:33:53.601 "mibps": 44.993004673811264, 00:33:53.601 "io_failed": 0, 00:33:53.601 "io_timeout": 0, 00:33:53.601 "avg_latency_us": 11054.314225287357, 00:33:53.601 "min_latency_us": 2252.8, 00:33:53.601 "max_latency_us": 9611.946666666667 00:33:53.601 } 00:33:53.601 ], 00:33:53.601 "core_count": 1 00:33:53.601 } 00:33:53.601 05:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:53.601 [2024-12-09 05:27:00.755184] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:33:53.601 [2024-12-09 05:27:00.755296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754814 ] 00:33:53.601 [2024-12-09 05:27:00.886005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.601 [2024-12-09 05:27:00.961830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.601 [2024-12-09 05:27:03.231638] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:53.601 [2024-12-09 05:27:03.231699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.601 [2024-12-09 05:27:03.231712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.601 [2024-12-09 05:27:03.231724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.601 [2024-12-09 05:27:03.231733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.601 [2024-12-09 05:27:03.231740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.601 [2024-12-09 05:27:03.231748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.601 [2024-12-09 05:27:03.231755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.601 [2024-12-09 05:27:03.231762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.601 [2024-12-09 05:27:03.231773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
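As a quick consistency check on the JSON above (not part of the test output): with the 4096-byte I/O size used here, the reported MiB/s follows directly from the reported IOPS, since

  \text{MiB/s} = \frac{\text{iops} \times 4096}{2^{20}} = \frac{11518.2092}{256} \approx 44.9930

which matches the "mibps" field to the digits shown.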
00:33:53.601 [2024-12-09 05:27:03.231814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:53.601 [2024-12-09 05:27:03.231845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393300 (9): Bad file descriptor 00:33:53.601 [2024-12-09 05:27:03.285151] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:53.601 Running I/O for 1 seconds... 00:33:53.601 11434.00 IOPS, 44.66 MiB/s 00:33:53.601 Latency(us) 00:33:53.601 [2024-12-09T04:27:07.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.601 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:53.601 Verification LBA range: start 0x0 length 0x4000 00:33:53.601 NVMe0n1 : 1.01 11518.21 44.99 0.00 0.00 11054.31 2252.80 9611.95 00:33:53.601 [2024-12-09T04:27:07.598Z] =================================================================================================================== 00:33:53.601 [2024-12-09T04:27:07.598Z] Total : 11518.21 44.99 0.00 0.00 11054.31 2252.80 9611.95 00:33:53.601 05:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:53.601 05:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:53.862 05:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:54.121 05:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:54.121 05:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:54.381 05:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:54.381 05:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1754814 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1754814 ']' 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1754814 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1754814 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1754814' 00:33:57.671 killing process with pid 1754814 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1754814 00:33:57.671 05:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1754814 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:58.241 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:58.241 rmmod nvme_tcp 00:33:58.501 rmmod nvme_fabrics 00:33:58.501 rmmod nvme_keyring 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1751077 ']' 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1751077 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1751077 ']' 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1751077 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1751077 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1751077' 00:33:58.501 killing process with pid 1751077 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1751077 00:33:58.501 05:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1751077 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.073 05:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.619 05:27:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:01.619 00:34:01.619 real 0m41.909s 00:34:01.619 user 2m8.044s 00:34:01.619 sys 0m9.246s 00:34:01.619 05:27:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.619 05:27:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:01.619 ************************************ 00:34:01.619 END TEST nvmf_failover 00:34:01.619 ************************************ 00:34:01.619 05:27:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:01.619 05:27:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:01.619 05:27:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.619 05:27:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.619 ************************************ 00:34:01.619 START TEST nvmf_host_discovery 00:34:01.620 ************************************ 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:01.620 * Looking for test storage... 
00:34:01.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:01.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.620 --rc genhtml_branch_coverage=1 00:34:01.620 --rc genhtml_function_coverage=1 00:34:01.620 --rc genhtml_legend=1 00:34:01.620 --rc geninfo_all_blocks=1 00:34:01.620 --rc geninfo_unexecuted_blocks=1 00:34:01.620 00:34:01.620 ' 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:01.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.620 --rc genhtml_branch_coverage=1 00:34:01.620 --rc genhtml_function_coverage=1 00:34:01.620 --rc genhtml_legend=1 00:34:01.620 --rc geninfo_all_blocks=1 00:34:01.620 --rc geninfo_unexecuted_blocks=1 00:34:01.620 00:34:01.620 ' 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:01.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.620 --rc genhtml_branch_coverage=1 00:34:01.620 --rc genhtml_function_coverage=1 00:34:01.620 --rc genhtml_legend=1 00:34:01.620 --rc geninfo_all_blocks=1 00:34:01.620 --rc geninfo_unexecuted_blocks=1 00:34:01.620 00:34:01.620 ' 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:01.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.620 --rc genhtml_branch_coverage=1 00:34:01.620 --rc genhtml_function_coverage=1 00:34:01.620 --rc genhtml_legend=1 00:34:01.620 --rc geninfo_all_blocks=1 00:34:01.620 --rc geninfo_unexecuted_blocks=1 00:34:01.620 00:34:01.620 ' 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:01.620 05:27:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.620 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:01.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:01.621 05:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:09.757 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:09.758 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:09.758 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.758 05:27:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:09.758 Found net devices under 0000:31:00.0: cvl_0_0 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:09.758 Found net devices under 0000:31:00.1: cvl_0_1 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.758 
05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:34:09.758 00:34:09.758 --- 10.0.0.2 ping statistics --- 00:34:09.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.758 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:09.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:34:09.758 00:34:09.758 --- 10.0.0.1 ping statistics --- 00:34:09.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.758 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:34:09.758 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1761928 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1761928 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1761928 ']' 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:09.759 05:27:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.759 [2024-12-09 05:27:23.100653] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:34:09.759 [2024-12-09 05:27:23.100778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:09.759 [2024-12-09 05:27:23.253377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.759 [2024-12-09 05:27:23.373599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:09.759 [2024-12-09 05:27:23.373666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:09.759 [2024-12-09 05:27:23.373679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:09.759 [2024-12-09 05:27:23.373692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:09.759 [2024-12-09 05:27:23.373710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:09.759 [2024-12-09 05:27:23.375188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.020 [2024-12-09 05:27:23.934208] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.020 [2024-12-09 05:27:23.946533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.020 null0 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.020 null1 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1762016 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1762016 /tmp/host.sock 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1762016 ']' 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.020 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:10.020 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:10.021 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.021 05:27:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.282 [2024-12-09 05:27:24.083366] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
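The discovery test runs two SPDK apps: the target (core mask 0x2, RPC on the default socket, inside the cvl_0_0_ns_spdk namespace) exposing a discovery listener on 8009 plus two null bdevs, and a second nvmf_tgt acting as the host on /tmp/host.sock, which then starts the discovery client. A minimal sketch of that topology, assuming the same addresses and NQNs used in this run (all commands are taken from the surrounding trace; in the real script waitforlisten gates each RPC on the app's socket being up):

  #!/usr/bin/env bash
  # Condensed two-app discovery setup; not the discovery.sh script itself.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  # Target app inside the target namespace, RPC on the default /var/tmp/spdk.sock.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  "$RPC" bdev_null_create null0 1000 512
  "$RPC" bdev_null_create null1 1000 512
  "$RPC" bdev_wait_for_examine

  # Host app on its own RPC socket; it runs bdev_nvme's discovery client.
  "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
  "$RPC" -s /tmp/host.sock log_set_flag bdev_nvme
  "$RPC" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test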
00:34:10.282 [2024-12-09 05:27:24.083500] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762016 ] 00:34:10.282 [2024-12-09 05:27:24.235846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.542 [2024-12-09 05:27:24.341307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:11.114 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.115 05:27:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.115 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.375 [2024-12-09 05:27:25.213519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:11.375 05:27:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.375 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.634 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.634 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:11.634 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:11.634 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:11.634 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:11.634 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:11.634 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:11.634 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:11.635 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:11.635 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.635 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:11.635 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.635 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:11.635 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.635 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:34:11.635 05:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:12.203 [2024-12-09 05:27:25.924048] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:12.203 [2024-12-09 05:27:25.924082] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:12.203 [2024-12-09 05:27:25.924112] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:12.203 
[2024-12-09 05:27:26.010389] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:12.203 [2024-12-09 05:27:26.072238] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:12.203 [2024-12-09 05:27:26.073607] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000394200:1 started. 00:34:12.203 [2024-12-09 05:27:26.075611] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:12.203 [2024-12-09 05:27:26.075636] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:12.203 [2024-12-09 05:27:26.082861] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000394200 was disconnected and freed. delete nvme_qpair. 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:12.463 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:12.724 05:27:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:12.724 05:27:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:12.724 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:12.725 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.725 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.725 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:12.725 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.725 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:12.985 [2024-12-09 05:27:26.854111] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000394700:1 started. 00:34:12.986 [2024-12-09 05:27:26.864685] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000394700 was disconnected and freed. delete nvme_qpair. 
00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.986 [2024-12-09 05:27:26.946090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:12.986 [2024-12-09 05:27:26.946813] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:12.986 [2024-12-09 05:27:26.946850] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:12.986 05:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.247 [2024-12-09 05:27:27.034688] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.247 [2024-12-09 05:27:27.098744] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:13.247 [2024-12-09 05:27:27.098806] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:13.247 [2024-12-09 05:27:27.098828] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:13.247 [2024-12-09 05:27:27.098841] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:13.247 05:27:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.190 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.452 [2024-12-09 05:27:28.217877] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:14.452 [2024-12-09 05:27:28.217902] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:14.452 [2024-12-09 05:27:28.219525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.452 [2024-12-09 05:27:28.219551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.452 [2024-12-09 05:27:28.219563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.452 [2024-12-09 05:27:28.219571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.452 [2024-12-09 05:27:28.219579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.452 [2024-12-09 05:27:28.219587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.452 [2024-12-09 05:27:28.219595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.452 [2024-12-09 05:27:28.219603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.452 [2024-12-09 05:27:28.219610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393a80 is same with the state(6) to be set 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # 
local max=10 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.452 [2024-12-09 05:27:28.229537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393a80 (9): Bad file descriptor 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:14.452 [2024-12-09 05:27:28.239571] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:14.452 [2024-12-09 05:27:28.239595] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:14.452 [2024-12-09 05:27:28.239605] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:14.452 [2024-12-09 05:27:28.239611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:14.452 [2024-12-09 05:27:28.239637] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:14.452 [2024-12-09 05:27:28.239863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.452 [2024-12-09 05:27:28.239896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393a80 with addr=10.0.0.2, port=4420 00:34:14.452 [2024-12-09 05:27:28.239906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393a80 is same with the state(6) to be set 00:34:14.452 [2024-12-09 05:27:28.239923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393a80 (9): Bad file descriptor 00:34:14.452 [2024-12-09 05:27:28.239947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:14.452 [2024-12-09 05:27:28.239955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:14.452 [2024-12-09 05:27:28.239964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:14.452 [2024-12-09 05:27:28.239972] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:14.452 [2024-12-09 05:27:28.239979] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:14.452 [2024-12-09 05:27:28.239984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-12-09 05:27:28.249669] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-09 05:27:28.249686] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-09 05:27:28.249692] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-09 05:27:28.249696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-09 05:27:28.249714] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-12-09 05:27:28.250100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-09 05:27:28.250114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393a80 with addr=10.0.0.2, port=4420
[2024-12-09 05:27:28.250122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393a80 is same with the state(6) to be set
[2024-12-09 05:27:28.250134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393a80 (9): Bad file descriptor
[2024-12-09 05:27:28.250152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-09 05:27:28.250159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-09 05:27:28.250166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-12-09 05:27:28.250172] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-12-09 05:27:28.250185] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-12-09 05:27:28.250190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-12-09 05:27:28.259746] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-09 05:27:28.259763] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-09 05:27:28.259769] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-09 05:27:28.259774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-09 05:27:28.259790] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.452 [2024-12-09 05:27:28.260011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.452 [2024-12-09 05:27:28.260026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393a80 with addr=10.0.0.2, port=4420 00:34:14.452 [2024-12-09 05:27:28.260033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393a80 is same with the state(6) to be set 00:34:14.452 [2024-12-09 05:27:28.260045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393a80 (9): Bad file descriptor 00:34:14.452 [2024-12-09 05:27:28.260055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:14.452 [2024-12-09 05:27:28.260061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:14.452 [2024-12-09 05:27:28.260068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:14.452 [2024-12-09 05:27:28.260074] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:14.452 [2024-12-09 05:27:28.260079] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:14.452 [2024-12-09 05:27:28.260084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:14.452 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:14.453 [2024-12-09 05:27:28.269826] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:34:14.453 [2024-12-09 05:27:28.269846] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:14.453 [2024-12-09 05:27:28.269852] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:14.453 [2024-12-09 05:27:28.269861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:14.453 [2024-12-09 05:27:28.269882] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:14.453 [2024-12-09 05:27:28.270246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.453 [2024-12-09 05:27:28.270260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393a80 with addr=10.0.0.2, port=4420 00:34:14.453 [2024-12-09 05:27:28.270268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393a80 is same with the state(6) to be set 00:34:14.453 [2024-12-09 05:27:28.270280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393a80 (9): Bad file descriptor 00:34:14.453 [2024-12-09 05:27:28.270297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:14.453 [2024-12-09 05:27:28.270304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:14.453 [2024-12-09 05:27:28.270311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:14.453 [2024-12-09 05:27:28.270317] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:14.453 [2024-12-09 05:27:28.270322] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:14.453 [2024-12-09 05:27:28.270327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:14.453 [2024-12-09 05:27:28.279914] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:14.453 [2024-12-09 05:27:28.279933] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:14.453 [2024-12-09 05:27:28.279939] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:14.453 [2024-12-09 05:27:28.279944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:14.453 [2024-12-09 05:27:28.279960] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:14.453 [... the identical ECONNREFUSED reconnect sequence for tqpair=0x615000393a80 (10.0.0.2:4420) repeats at 05:27:28.280, 05:27:28.290 and 05:27:28.300, each attempt ending in "Resetting controller failed." ...]
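Note: the errno = 111 (ECONNREFUSED) burst above is the expected failure mode at this point in the test: the 10.0.0.2:4420 listener has been removed, so every reconnect attempt is refused until the discovery poller sees the subsystem reappear on port 4421 (the "not found" / "found again" pair below). A minimal sketch of the path-polling helper the xtrace below replays, assuming only what the trace itself shows (rpc_cmd talks to the host app over the /tmp/host.sock RPC socket):

  get_subsystem_paths() {   # host/discovery.sh@63, as replayed below
      local name=$1
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" |
          jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  # waitforcondition then retries (max 10 passes) until:
  #   [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]   # 4421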
00:34:14.453 [2024-12-09 05:27:28.304594] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:14.453 [2024-12-09 05:27:28.304621] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.453 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.713 05:27:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.653 [2024-12-09 05:27:29.644862] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:15.653 [2024-12-09 05:27:29.644883] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:15.653 [2024-12-09 05:27:29.644906] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:15.913 [2024-12-09 05:27:29.733154] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:16.174 [2024-12-09 05:27:30.040009] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:16.174 [2024-12-09 05:27:30.041049] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x615000396280:1 started. 
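Note: the records above are the normal discovery attach path in order: attach to the discovery controller on 8009, fetch the discovery log page, learn the NVM subsystem at 10.0.0.2:4421, create the controller, connect its qpair. The call that triggered it is the one replayed in the xtrace above; a sketch of that invocation, with the flag-to-field mapping taken from the JSON-RPC request echoed verbatim in the error dump below:

  # -b nvme -> "name" (controller name prefix); -t/-a/-s -> "trtype"/"traddr"/
  # "trsvcid" (discovery service); -f -> "adrfam"; -q -> "hostnqn";
  # -w -> "wait_for_attach": true (block until the initial attach completes)
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

Re-issuing the same call while discovery service "nvme" already exists is exactly what host/discovery.sh@143 asserts next: it must fail with code -17 ("File exists").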
00:34:16.174 [2024-12-09 05:27:30.042754] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:16.174 [2024-12-09 05:27:30.042790] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.174 [2024-12-09 05:27:30.052946] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x615000396280 was disconnected and freed. delete nvme_qpair. 
00:34:16.174 request: 00:34:16.174 { 00:34:16.174 "name": "nvme", 00:34:16.174 "trtype": "tcp", 00:34:16.174 "traddr": "10.0.0.2", 00:34:16.174 "adrfam": "ipv4", 00:34:16.174 "trsvcid": "8009", 00:34:16.174 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:16.174 "wait_for_attach": true, 00:34:16.174 "method": "bdev_nvme_start_discovery", 00:34:16.174 "req_id": 1 00:34:16.174 } 00:34:16.174 Got JSON-RPC error response 00:34:16.174 response: 00:34:16.174 { 00:34:16.174 "code": -17, 00:34:16.174 "message": "File exists" 00:34:16.174 } 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.174 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.434 request: 00:34:16.434 { 00:34:16.434 "name": "nvme_second", 00:34:16.434 "trtype": "tcp", 00:34:16.434 "traddr": "10.0.0.2", 00:34:16.434 "adrfam": "ipv4", 00:34:16.434 "trsvcid": "8009", 00:34:16.434 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:16.434 "wait_for_attach": true, 00:34:16.434 "method": "bdev_nvme_start_discovery", 00:34:16.434 "req_id": 1 00:34:16.434 } 00:34:16.434 Got JSON-RPC error response 00:34:16.434 response: 00:34:16.434 { 00:34:16.434 "code": -17, 00:34:16.434 "message": "File exists" 00:34:16.434 } 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:16.434 05:27:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.434 05:27:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.374 [2024-12-09 05:27:31.294351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.374 [2024-12-09 05:27:31.294387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000396780 with addr=10.0.0.2, port=8010 00:34:17.374 [2024-12-09 05:27:31.294422] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:17.375 [2024-12-09 05:27:31.294433] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:17.375 [2024-12-09 05:27:31.294442] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:18.314 [2024-12-09 05:27:32.296539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.314 [2024-12-09 05:27:32.296568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000396a00 with addr=10.0.0.2, port=8010 00:34:18.314 [2024-12-09 05:27:32.296597] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:18.314 [2024-12-09 05:27:32.296605] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:18.314 [2024-12-09 05:27:32.296613] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:19.698 [2024-12-09 05:27:33.298596] 
bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:19.698 request: 00:34:19.698 { 00:34:19.698 "name": "nvme_second", 00:34:19.698 "trtype": "tcp", 00:34:19.698 "traddr": "10.0.0.2", 00:34:19.698 "adrfam": "ipv4", 00:34:19.698 "trsvcid": "8010", 00:34:19.698 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:19.698 "wait_for_attach": false, 00:34:19.698 "attach_timeout_ms": 3000, 00:34:19.698 "method": "bdev_nvme_start_discovery", 00:34:19.698 "req_id": 1 00:34:19.698 } 00:34:19.698 Got JSON-RPC error response 00:34:19.698 response: 00:34:19.698 { 00:34:19.698 "code": -110, 00:34:19.698 "message": "Connection timed out" 00:34:19.698 } 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1762016 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:19.698 rmmod nvme_tcp 00:34:19.698 rmmod nvme_fabrics 00:34:19.698 rmmod nvme_keyring 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:19.698 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:19.698 05:27:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1761928 ']' 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1761928 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1761928 ']' 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1761928 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761928 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1761928' 00:34:19.699 killing process with pid 1761928 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1761928 00:34:19.699 05:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1761928 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.380 05:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.390 00:34:22.390 real 0m20.979s 00:34:22.390 user 0m24.497s 00:34:22.390 sys 0m7.364s 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.390 ************************************ 00:34:22.390 END TEST nvmf_host_discovery 00:34:22.390 ************************************ 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.390 ************************************ 00:34:22.390 START TEST nvmf_host_multipath_status 00:34:22.390 ************************************ 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:22.390 * Looking for test storage... 00:34:22.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:22.390 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.651 --rc genhtml_branch_coverage=1 00:34:22.651 --rc genhtml_function_coverage=1 00:34:22.651 --rc genhtml_legend=1 00:34:22.651 --rc geninfo_all_blocks=1 00:34:22.651 --rc geninfo_unexecuted_blocks=1 00:34:22.651 00:34:22.651 ' 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.651 --rc genhtml_branch_coverage=1 00:34:22.651 --rc genhtml_function_coverage=1 00:34:22.651 --rc genhtml_legend=1 00:34:22.651 --rc geninfo_all_blocks=1 00:34:22.651 --rc geninfo_unexecuted_blocks=1 00:34:22.651 00:34:22.651 ' 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.651 --rc genhtml_branch_coverage=1 00:34:22.651 --rc genhtml_function_coverage=1 00:34:22.651 --rc genhtml_legend=1 00:34:22.651 --rc geninfo_all_blocks=1 00:34:22.651 --rc geninfo_unexecuted_blocks=1 00:34:22.651 00:34:22.651 ' 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.651 --rc genhtml_branch_coverage=1 00:34:22.651 --rc genhtml_function_coverage=1 00:34:22.651 --rc genhtml_legend=1 00:34:22.651 --rc geninfo_all_blocks=1 00:34:22.651 --rc geninfo_unexecuted_blocks=1 00:34:22.651 00:34:22.651 ' 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
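Note: the scripts/common.sh walk above is a semantic version check: "lt 1.15 2" splits each version string on the IFS characters ".-:" and compares component-wise, so lcov 1.15 sorts below 2 and the branch/function-coverage LCOV_OPTS get exported. A condensed, self-contained rendering of that comparison (version_lt is a hypothetical name here; the real cmp_versions helper traced above is more general):

  version_lt() {   # true (0) iff $1 < $2, comparing dot/dash/colon components
      local IFS=.-: i
      local -a a=($1) b=($2)
      for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # all components equal -> not less-than
  }
  version_lt 1.15 2 && echo 'old lcov: use --rc lcov_*_coverage options'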
00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.651 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:22.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.652 05:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:30.794 05:27:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.794 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:30.795 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:30.795 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:30.795 Found net devices under 0000:31:00.0: cvl_0_0 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:34:30.795 Found net devices under 0000:31:00.1: cvl_0_1 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.795 05:27:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:30.795 05:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:30.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:34:30.795 00:34:30.795 --- 10.0.0.2 ping statistics --- 00:34:30.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.795 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:30.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:34:30.795 00:34:30.795 --- 10.0.0.1 ping statistics --- 00:34:30.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.795 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1768261 00:34:30.795 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1768261 00:34:30.796 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:30.796 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1768261 ']' 00:34:30.796 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.796 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:30.796 05:27:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.796 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:30.796 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:30.796 [2024-12-09 05:27:44.180103] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:34:30.796 [2024-12-09 05:27:44.180233] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.796 [2024-12-09 05:27:44.346268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:30.796 [2024-12-09 05:27:44.472931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.796 [2024-12-09 05:27:44.472998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.796 [2024-12-09 05:27:44.473012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.796 [2024-12-09 05:27:44.473028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.796 [2024-12-09 05:27:44.473038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:30.796 [2024-12-09 05:27:44.475635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.796 [2024-12-09 05:27:44.475660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.057 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.057 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:31.057 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:31.057 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:31.057 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:31.057 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.057 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1768261 00:34:31.057 05:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:31.317 [2024-12-09 05:27:45.162017] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.317 05:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:31.577 Malloc0 00:34:31.577 05:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:34:31.837 05:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:32.099 05:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:32.099 [2024-12-09 05:27:46.017032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.099 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:32.361 [2024-12-09 05:27:46.217586] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1768764 00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1768764 /var/tmp/bdevperf.sock 00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1768764 ']' 00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:32.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
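[annotation] Condensed, the target-side setup performed above comes down to six RPCs. They are quoted verbatim from the trace, with the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path abbreviated to rpc.py; together they publish one Malloc-backed namespace on two TCP listeners of the same subsystem, which is what gives the host two paths to one namespace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf was started with -z, so it idles until its RPC socket /var/tmp/bdevperf.sock is up; the next entries attach it to both listeners with -x multipath, producing the two io_paths the rest of the test inspects.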
00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.361 05:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:33.307 05:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:33.307 05:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:33.307 05:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:33.569 05:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:33.829 Nvme0n1 00:34:34.091 05:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:34.351 Nvme0n1 00:34:34.351 05:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:34.351 05:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:36.889 05:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:36.889 05:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:36.889 05:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:36.889 05:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:37.828 05:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:37.828 05:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:37.828 05:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.828 05:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:38.087 05:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.087 05:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:38.087 05:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.087 05:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:38.087 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:38.087 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:38.087 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.087 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:38.347 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.347 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:38.347 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:38.347 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.607 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.607 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:38.607 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.607 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:38.607 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.607 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:38.607 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.607 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:38.867 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.867 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:38.867 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
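[annotation] The check_status calls that recur from here on are six port_status assertions in a fixed order: 4420 current, 4421 current, 4420 connected, 4421 connected, 4420 accessible, 4421 accessible; the first pass above (check_status true false true true true true) matches that ordering exactly. Each assertion reads bdev_nvme_get_io_paths over the bdevperf RPC socket and extracts one field with jq. A sketch of the helper as reconstructed from the trace; the rpc.py invocation and the jq filter appear verbatim above, while the wrapper shape itself is inferred:

    # rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    port_status() {   # usage: port_status <trsvcid> <field> <expected>
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]   # e.g. port_status 4420 current true
    }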
00:34:39.126 05:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:39.126 05:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.509 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:40.770 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.770 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:40.770 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.770 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:41.031 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.031 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:41.031 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
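[annotation] This is the failover half of the test. Under the default active_passive multipath policy exactly one io_path reports current==true, and demoting 4420 to non_optimized while 4421 stays optimized moves that flag, and the I/O, over to 4421, as the surrounding checks confirm. Condensed from the trace, again with rpc.py as shorthand for the full script path and port_status as sketched above:

    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
    sleep 1                           # give the initiator time to pick up the ANA change
    port_status 4420 current false    # old path demoted
    port_status 4421 current true     # optimized listener now carries the I/O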
00:34:41.031 05:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:41.031 05:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.031 05:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:41.032 05:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.032 05:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:41.312 05:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.312 05:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:41.312 05:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:41.571 05:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:41.571 05:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:42.953 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:42.953 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:42.953 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.954 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:42.954 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.954 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:42.954 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.954 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:42.954 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:42.954 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:42.954 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.954 05:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:43.214 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.214 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:43.214 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.214 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:43.474 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.474 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:43.474 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.474 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:43.474 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.474 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:43.474 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.474 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:43.734 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.734 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:43.734 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:43.994 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:44.254 05:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:45.195 05:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:45.195 05:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:45.195 05:27:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.195 05:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:45.195 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.195 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:45.195 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.195 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:45.455 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:45.455 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:45.455 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.455 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:45.716 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.716 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:45.716 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.716 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:45.976 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.976 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:45.976 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.976 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:45.976 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.976 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:45.976 05:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.976 05:27:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:46.236 05:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:46.236 05:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:46.236 05:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:46.495 05:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:46.495 05:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.873 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:48.134 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.134 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:48.134 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.134 05:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:48.397 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.397 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:48.397 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.397 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:48.397 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:48.397 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:48.397 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.397 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:48.657 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:48.657 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:48.657 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:48.916 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:48.916 05:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:50.293 05:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:50.293 05:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:50.293 05:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.293 05:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:50.293 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:50.293 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:50.293 05:28:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.293 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:50.293 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.293 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:50.293 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.293 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:50.552 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.552 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:50.552 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.552 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:50.811 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.811 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:50.811 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.811 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:50.811 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:50.811 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:50.811 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.811 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:51.070 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.070 05:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:51.329 05:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:34:51.329 05:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:51.329 05:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:51.587 05:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:52.523 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:52.523 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:52.523 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.523 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.783 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.783 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:52.783 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.783 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:53.044 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.044 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:53.044 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.044 05:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:53.303 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.303 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:53.303 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.303 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:53.303 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.303 05:28:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:53.303 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.303 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:53.563 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.563 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:53.563 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.563 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:53.823 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.823 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:53.823 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:53.823 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:54.083 05:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:55.025 05:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:55.025 05:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:55.025 05:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.025 05:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:55.285 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.285 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:55.285 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.285 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:55.546 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.546 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:55.546 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:55.546 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.546 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.546 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:55.546 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.546 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:55.807 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.807 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:55.807 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:55.807 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.067 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.067 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:56.067 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.067 05:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:56.067 05:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.067 05:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:56.067 05:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:56.328 05:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:56.589 05:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
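[annotation] The policy was switched above with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, so "current" is no longer exclusive: every path in the best available ANA group carries I/O. That is why the preceding non_optimized/optimized pass still showed a single current path on 4421, while now, with both listeners set to non_optimized and therefore in the same group, the next check expects all six fields true; in terms of the helper sketched earlier:

    port_status 4420 current true    && port_status 4421 current true
    port_status 4420 connected true  && port_status 4421 connected true
    port_status 4420 accessible true && port_status 4421 accessible true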
00:34:57.529 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:57.529 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:57.529 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.529 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:57.789 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.789 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:57.789 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.789 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:57.789 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.789 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:57.789 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.789 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:58.049 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.049 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:58.049 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.049 05:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:58.309 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.309 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:58.309 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.309 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:58.570 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.570 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:58.570 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.570 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:58.570 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.570 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:58.570 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:58.831 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:59.091 05:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:00.031 05:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:00.031 05:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:00.031 05:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.031 05:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:00.292 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.292 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:00.292 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.292 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:00.292 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:00.292 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:00.292 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.292 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:00.555 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:35:00.555 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:00.555 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.555 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:00.816 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.816 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:00.816 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.816 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:00.816 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.816 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:00.816 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.816 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:01.078 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:01.078 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1768764 00:35:01.078 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1768764 ']' 00:35:01.078 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1768764 00:35:01.078 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:01.078 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:01.078 05:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768764 00:35:01.078 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:35:01.078 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:35:01.078 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768764' 00:35:01.078 killing process with pid 1768764 00:35:01.078 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1768764 00:35:01.078 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1768764 00:35:01.078 { 00:35:01.078 "results": [ 00:35:01.078 { 00:35:01.078 "job": "Nvme0n1", 
00:35:01.078 "core_mask": "0x4", 00:35:01.078 "workload": "verify", 00:35:01.078 "status": "terminated", 00:35:01.078 "verify_range": { 00:35:01.078 "start": 0, 00:35:01.078 "length": 16384 00:35:01.078 }, 00:35:01.078 "queue_depth": 128, 00:35:01.078 "io_size": 4096, 00:35:01.078 "runtime": 26.614166, 00:35:01.078 "iops": 10767.649078314158, 00:35:01.078 "mibps": 42.06112921216468, 00:35:01.078 "io_failed": 0, 00:35:01.078 "io_timeout": 0, 00:35:01.078 "avg_latency_us": 11867.35385769254, 00:35:01.078 "min_latency_us": 935.2533333333333, 00:35:01.078 "max_latency_us": 3019898.88 00:35:01.078 } 00:35:01.078 ], 00:35:01.078 "core_count": 1 00:35:01.078 } 00:35:01.662 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1768764 00:35:01.662 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:01.662 [2024-12-09 05:27:46.343928] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:01.662 [2024-12-09 05:27:46.344063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768764 ] 00:35:01.662 [2024-12-09 05:27:46.501203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.662 [2024-12-09 05:27:46.623118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:01.662 Running I/O for 90 seconds... 00:35:01.662 9775.00 IOPS, 38.18 MiB/s [2024-12-09T04:28:15.659Z] 9873.50 IOPS, 38.57 MiB/s [2024-12-09T04:28:15.659Z] 9923.67 IOPS, 38.76 MiB/s [2024-12-09T04:28:15.659Z] 10032.50 IOPS, 39.19 MiB/s [2024-12-09T04:28:15.659Z] 10343.60 IOPS, 40.40 MiB/s [2024-12-09T04:28:15.659Z] 10548.00 IOPS, 41.20 MiB/s [2024-12-09T04:28:15.659Z] 10697.71 IOPS, 41.79 MiB/s [2024-12-09T04:28:15.659Z] 10807.00 IOPS, 42.21 MiB/s [2024-12-09T04:28:15.659Z] 10869.78 IOPS, 42.46 MiB/s [2024-12-09T04:28:15.659Z] 10944.10 IOPS, 42.75 MiB/s [2024-12-09T04:28:15.659Z] 10997.64 IOPS, 42.96 MiB/s [2024-12-09T04:28:15.659Z] [2024-12-09 05:28:00.256823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.256870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 
[2024-12-09 05:28:00.257642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.662 [2024-12-09 05:28:00.257684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-12-09 05:28:00.257706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-12-09 05:28:00.257728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-12-09 05:28:00.257749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-12-09 05:28:00.257770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-12-09 05:28:00.257791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-12-09 05:28:00.257813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.662 [2024-12-09 05:28:00.257834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.662 [2024-12-09 05:28:00.257842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.258521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.258543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.258566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.258588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.258611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.258634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.258657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.258679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.663 [2024-12-09 05:28:00.258701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.663 [2024-12-09 05:28:00.258724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.663 [2024-12-09 05:28:00.258747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.663 [2024-12-09 05:28:00.258772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.663 [2024-12-09 05:28:00.258794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.663 [2024-12-09 05:28:00.258822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.663 [2024-12-09 05:28:00.258958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.258976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.258992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259107] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.663 [2024-12-09 05:28:00.259533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:01.663 [2024-12-09 05:28:00.259549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:01.664 
[2024-12-09 05:28:00.259597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 
m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.259990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.259997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:01.664 [2024-12-09 05:28:00.260561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.664 [2024-12-09 05:28:00.260569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:00.260588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:00.260595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:00.260615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:00.260623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:01.665 10924.58 IOPS, 42.67 MiB/s [2024-12-09T04:28:15.662Z] 10084.23 IOPS, 39.39 MiB/s [2024-12-09T04:28:15.662Z] 9363.93 IOPS, 36.58 MiB/s [2024-12-09T04:28:15.662Z] 8830.33 IOPS, 34.49 MiB/s [2024-12-09T04:28:15.662Z] 9003.69 IOPS, 35.17 MiB/s [2024-12-09T04:28:15.662Z] 9162.24 IOPS, 35.79 MiB/s [2024-12-09T04:28:15.662Z] 9533.11 IOPS, 37.24 MiB/s [2024-12-09T04:28:15.662Z] 9858.68 IOPS, 38.51 MiB/s [2024-12-09T04:28:15.662Z] 10031.65 IOPS, 39.19 MiB/s [2024-12-09T04:28:15.662Z] 10103.76 IOPS, 39.47 MiB/s [2024-12-09T04:28:15.662Z] 10170.32 IOPS, 39.73 MiB/s [2024-12-09T04:28:15.662Z] 10414.96 IOPS, 
40.68 MiB/s [2024-12-09T04:28:15.662Z] 10635.50 IOPS, 41.54 MiB/s [2024-12-09T04:28:15.662Z] [2024-12-09 05:28:12.826244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826499] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.665 [2024-12-09 05:28:12.826825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.665 [2024-12-09 05:28:12.826909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:01.665 [2024-12-09 05:28:12.826922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:83 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:01.665 [2024-12-09 05:28:12.826929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:35:01.665 [2024-12-09 05:28:12.826943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:01.665 [2024-12-09 05:28:12.826952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0
[... long run of similar qid:1 READ/WRITE command/completion pairs elided (2024-12-09 05:28:12.826965 through 05:28:12.836239); every completion in the run reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0 ...]
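The "(03/02)" printed with each completion above is status code type 0x3 (path related) and status code 0x02, i.e. the target is reporting the namespace's ANA group as inaccessible while the test toggles ANA states. A minimal sketch of how an initiator-side completion callback could recognize this status; the enum names are assumed from SPDK's include/spdk/nvme_spec.h and should be checked against the tree under test:

    /* Sketch only, not part of this test run: classify the (03/02)
     * completions seen in the log above. */
    #include "spdk/nvme.h"

    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_PATH &&
                cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE) {
                    /* dnr:0 in the log means do-not-retry is clear, so a
                     * multipath-aware initiator may resubmit this I/O on a
                     * path whose ANA state is optimized or non-optimized. */
            }
    }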
05:28:12.836252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.670 [2024-12-09 05:28:12.836260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.836281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.836302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.836324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.836346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.836367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.836388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.836410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.836431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.836452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.836474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.836494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.836515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.836536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.836558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.836579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.836599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.836615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.836623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.838113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.838141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:01.671 [2024-12-09 05:28:12.838356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.671 [2024-12-09 05:28:12.838423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.671 [2024-12-09 05:28:12.838444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:01.671 [2024-12-09 05:28:12.838458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.838465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.838485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.838494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.838507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.838515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.838529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.838536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.838550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.838558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.838571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.838578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.838591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.838599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.838613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.838622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.838636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.838643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.838656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.838664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.847841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.847867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.847885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.847893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.847908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.847916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.847931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.847938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.847953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.847961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848614] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.848632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.848680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.848702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.848723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.848744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.848765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.848790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.848980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.848988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.849002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.849010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.850009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.672 [2024-12-09 05:28:12.850026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.850043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.672 [2024-12-09 05:28:12.850054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:01.672 [2024-12-09 05:28:12.850068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 
05:28:12.850263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129000 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.850539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.850637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.850645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.852343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.852368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.852399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.852420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.852441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.852461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.852482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.852504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.673 [2024-12-09 05:28:12.852527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.852548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:01.673 [2024-12-09 05:28:12.852561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.673 [2024-12-09 05:28:12.852569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:35:01.674 [2024-12-09 05:28:12.852582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.852589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.852611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.852632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.852653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.852674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.852779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.852848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.852890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.852932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.852986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.852994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.853007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.853014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.853027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.853036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.853050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.853057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.853954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.853971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.853993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.854001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.854023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.854044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.854066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.854087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:01.674 [2024-12-09 05:28:12.854107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.854128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.854148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.854169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.854191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.854212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.854233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.674 [2024-12-09 05:28:12.854254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:01.674 [2024-12-09 05:28:12.854268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.674 [2024-12-09 05:28:12.854275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:01.675 [2024-12-09 05:28:12.854289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.675 [2024-12-09 05:28:12.854296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:01.675 [2024-12-09 05:28:12.854309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:66 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[log condensed: several hundred repeated nvme_qpair.c NOTICE pairs omitted — 243:nvme_io_qpair_print_command reporting queued READ/WRITE commands (sqid:1, nsid:1, len:8; cid and lba varying over roughly 127608–130200) and 474:spdk_nvme_print_completion reporting each completion as ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0, p:0 m:0 dnr:0, sqhd advancing from 0x0032 and wrapping past 0x007f; Jenkins timestamps 00:35:01.675–00:35:01.680, application timestamps 2024-12-09 05:28:12.854–12.869]
00:35:01.680 [2024-12-09 05:28:12.868900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.680 [2024-12-09
05:28:12.868920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-12-09 05:28:12.868937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.680 [2024-12-09 05:28:12.868944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.680 [2024-12-09 05:28:12.868965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.680 [2024-12-09 05:28:12.868972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.680 [2024-12-09 05:28:12.868986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.680 [2024-12-09 05:28:12.868993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:01.680 [2024-12-09 05:28:12.869006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.680 [2024-12-09 05:28:12.869013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:01.680 [2024-12-09 05:28:12.869027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.680 [2024-12-09 05:28:12.869034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:01.680 [2024-12-09 05:28:12.869047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.680 [2024-12-09 05:28:12.869055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:01.680 [2024-12-09 05:28:12.869068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130360 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869340] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 
05:28:12.869548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-12-09 05:28:12.869832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:01.681 [2024-12-09 05:28:12.869868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.681 [2024-12-09 05:28:12.869876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.869890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.869897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.869911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.869918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.869932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.869939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.869952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.869960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.869973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.869981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.869994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.870002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.870015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.870022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.870036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.870043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:20 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 
05:28:12.872798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.682 [2024-12-09 05:28:12.872875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-12-09 05:28:12.872897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:01.682 [2024-12-09 05:28:12.872910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.872918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.872932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.872939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.872952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.872960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.872973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.872981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.872994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.873022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.873064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.873085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.873975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.873992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.874149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.874170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.683 [2024-12-09 05:28:12.874314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:87 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.874659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.874682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.874703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.874725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:01.683 [2024-12-09 05:28:12.874738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-12-09 05:28:12.874745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:01.684 [2024-12-09 05:28:12.874758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.684 [2024-12-09 05:28:12.874766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:01.684 [2024-12-09 05:28:12.874779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.684 [2024-12-09 05:28:12.874787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:01.684 [2024-12-09 05:28:12.874800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.684 [2024-12-09 05:28:12.874807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:01.684 [2024-12-09 05:28:12.874825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.684 [2024-12-09 05:28:12.874834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:01.684 [2024-12-09 05:28:12.874847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.684 [2024-12-09 05:28:12.874854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:01.684 [2024-12-09 05:28:12.874868] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.684 [2024-12-09 05:28:12.874876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:01.684 [2024-12-09 05:28:12.874918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:01.684 [2024-12-09 05:28:12.874927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... over a hundred further READ/WRITE command/completion NOTICE pairs elided for readability; between 05:28:12.874890 and 05:28:12.882524 every queued I/O on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the ANA "inaccessible" state reported by the path under test ...]
00:35:01.687 [2024-12-09 05:28:12.882516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.687 [2024-12-09 05:28:12.882524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:35:01.687 10726.48 IOPS, 41.90 MiB/s [2024-12-09T04:28:15.684Z] 10750.69 IOPS, 41.99 MiB/s [2024-12-09T04:28:15.684Z] Received shutdown signal, test time was about 26.614799 seconds
00:35:01.687
00:35:01.687                                                              Latency(us)
00:35:01.687 [2024-12-09T04:28:15.684Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:01.687 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:35:01.687 Verification LBA range: start 0x0 length 0x4000
00:35:01.687 	 Nvme0n1            :      26.61   10767.65      42.06       0.00       0.00   11867.35     935.25 3019898.88
00:35:01.687 [2024-12-09T04:28:15.684Z] ===================================================================================================================
00:35:01.687 [2024-12-09T04:28:15.684Z] Total              :             10767.65      42.06       0.00       0.00   11867.35     935.25 3019898.88
00:35:01.687 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:01.948 rmmod nvme_tcp
00:35:01.948 rmmod nvme_fabrics
00:35:01.948 rmmod nvme_keyring
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1768261 ']'
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1768261
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1768261 ']'
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1768261
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768261
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768261'
00:35:01.948 killing process with pid 1768261
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1768261
00:35:01.948 05:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1768261
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:02.519 05:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:05.065 05:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:05.065
00:35:05.065 real 0m42.273s
00:35:05.065 user 1m47.725s
00:35:05.065 sys 0m12.012s
00:35:05.065 05:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:05.065 05:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:35:05.065 ************************************
00:35:05.065 END TEST nvmf_host_multipath_status
00:35:05.065 ************************************
00:35:05.065 05:28:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:05.065 05:28:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:05.065 05:28:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:05.065 05:28:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.065 ************************************
00:35:05.065 START TEST nvmf_discovery_remove_ifc
00:35:05.065 ************************************
00:35:05.065 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:05.065 * Looking for test storage...
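The teardown traced above, at the end of the multipath_status run, follows the standard SPDK autotest pattern: delete the NVMe-oF subsystem over JSON-RPC, kill the target process and wait for it to exit, then unload the kernel initiator modules. A minimal stand-alone sketch of the same sequence; the $tgt_pid variable and the ./spdk checkout location are assumptions for illustration, not values taken from this run:

    # delete the subsystem the test created, then stop the nvmf target
    ./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$tgt_pid"          # same role as "killprocess 1768261" in the trace
    wait "$tgt_pid" || true  # reap the process; tolerate a non-zero exit status
    # unload the kernel initiator modules, as nvmfcleanup does above
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics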
00:35:05.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:05.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:05.066 --rc genhtml_branch_coverage=1
00:35:05.066 --rc genhtml_function_coverage=1
00:35:05.066 --rc genhtml_legend=1
00:35:05.066 --rc geninfo_all_blocks=1
00:35:05.066 --rc geninfo_unexecuted_blocks=1
00:35:05.066
00:35:05.066 '
[... the identical option block is traced three more times as LCOV_OPTS is assigned and LCOV='lcov ...' is assigned and exported; elided for readability ...]
00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
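The cmp_versions trace above shows how scripts/common.sh decides that lcov 1.15 predates 2: both strings are split on '.', '-' and ':' (IFS=.-:), each field is validated as a decimal, and the fields are compared numerically left to right, with missing fields treated as zero. A simplified re-implementation of just the less-than path (the real helper also handles other operators and stricter validation):

    # version_lt A B -> exit 0 when A sorts before B, field by field
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # first lower field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }
    version_lt 1.15 2 && echo older   # prints "older": 1 < 2 in the first field, matching the "return 0" above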
05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:05.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:05.066 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
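The "[: : integer expression expected" message captured above is a genuine failure at nvmf/common.sh line 33: test(1)'s -eq operator requires integers on both sides, and the tested variable expanded to the empty string, as the trace '[' '' -eq 1 ']' shows. The script continues because the failing test simply evaluates false. The usual guard is to default the expansion before comparing; a sketch, where FLAG is a placeholder and not the actual variable tested at line 33:

    # '[' '' -eq 1 ']'  ->  "[: : integer expression expected"
    if [ "${FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-arg)   # hypothetical consequence of the flag
    fi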
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:05.067 05:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:13.207 05:28:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:13.207 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.207 05:28:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:13.207 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:13.207 Found net devices under 0000:31:00.0: cvl_0_0 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:13.207 Found net devices under 0000:31:00.1: cvl_0_1 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:13.207 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:13.208 
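The nvmf_tcp_init sequence traced above builds the physical-NIC topology for this run: one port of the back-to-back E810 pair (cvl_0_0) is moved into its own network namespace to act as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule tagged SPDK_NVMF opens the NVMe/TCP port. Condensed from the commands in the trace (device and namespace names are specific to this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the comment lets teardown strip the rule with grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings that follow verify reachability in both directions before any NVMe traffic is attempted.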
05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:13.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:13.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:35:13.208 00:35:13.208 --- 10.0.0.2 ping statistics --- 00:35:13.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.208 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:13.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:13.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:35:13.208 00:35:13.208 --- 10.0.0.1 ping statistics --- 00:35:13.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.208 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1778781 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1778781 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1778781 ']' 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:13.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.208 05:28:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.208 [2024-12-09 05:28:26.533332] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:13.208 [2024-12-09 05:28:26.533464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.208 [2024-12-09 05:28:26.694071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.208 [2024-12-09 05:28:26.814781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.208 [2024-12-09 05:28:26.814857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:13.208 [2024-12-09 05:28:26.814874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.208 [2024-12-09 05:28:26.814887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.208 [2024-12-09 05:28:26.814902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.208 [2024-12-09 05:28:26.816393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.471 [2024-12-09 05:28:27.339078] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.471 [2024-12-09 05:28:27.347288] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:13.471 null0 00:35:13.471 [2024-12-09 05:28:27.379294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1778889 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1778889 /tmp/host.sock 00:35:13.471 05:28:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1778889 ']' 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:13.471 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.471 05:28:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.732 [2024-12-09 05:28:27.484137] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:13.732 [2024-12-09 05:28:27.484243] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778889 ] 00:35:13.732 [2024-12-09 05:28:27.630838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.993 [2024-12-09 05:28:27.731335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.253 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.253 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:14.253 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:14.253 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:14.253 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.253 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.253 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.253 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:14.253 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.254 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.515 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.515 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:14.515 05:28:28 
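At this point two SPDK processes are running: the target (nvmf_tgt -m 0x2 inside the namespace, RPC socket /var/tmp/spdk.sock, with listeners on 10.0.0.2 ports 8009 and 4420 created by an earlier rpc_cmd whose arguments are not traced) and the host-side app (nvmf_tgt -m 0x1, RPC socket /tmp/host.sock). Re-run by hand it looks roughly like the sketch below, assuming rpc_cmd resolves to scripts/rpc.py and with paths abbreviated:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &  # host
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach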
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.515 05:28:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:15.905 [2024-12-09 05:28:29.518656] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:15.905 [2024-12-09 05:28:29.518694] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:15.905 [2024-12-09 05:28:29.518723] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:15.905 [2024-12-09 05:28:29.607012] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:15.905 [2024-12-09 05:28:29.790403] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:15.905 [2024-12-09 05:28:29.791698] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000394480:1 started. 00:35:15.905 [2024-12-09 05:28:29.793563] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:15.905 [2024-12-09 05:28:29.793627] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:15.905 [2024-12-09 05:28:29.793677] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:15.905 [2024-12-09 05:28:29.793698] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:15.905 [2024-12-09 05:28:29.793728] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:15.905 [2024-12-09 05:28:29.798487] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000394480 was disconnected and freed. delete nvme_qpair. 
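The repeated bdev_get_bdevs | jq | sort | xargs calls that follow are the script's wait_for_bdev polling loop. A sketch of the pattern (the real helper in host/discovery_remove_ifc.sh also enforces a timeout):

    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # expected list is "nvme0n1" after attach, "" after the interface
        # is removed, and "nvme1n1" once the interface comes back
        while [[ $(get_bdev_list) != "$1" ]]; do sleep 1; done
    }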
00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:15.905 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:16.167 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:16.167 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:16.167 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:16.167 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:16.167 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.167 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:16.167 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:16.167 05:28:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:16.167 05:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.167 05:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:16.167 05:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:17.110 05:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:17.110 05:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:17.110 05:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:17.110 05:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.110 05:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:17.110 05:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.110 05:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:17.110 05:28:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.110 05:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:17.110 05:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:18.502 05:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:19.441 05:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:20.382 05:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:20.382 05:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.382 05:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:20.382 05:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.382 05:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:20.382 05:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.382 05:28:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:20.382 05:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.382 05:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:20.382 05:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:21.333 [2024-12-09 05:28:35.233707] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:21.333 [2024-12-09 05:28:35.233757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:21.333 [2024-12-09 05:28:35.233770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.333 [2024-12-09 05:28:35.233782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:21.333 [2024-12-09 05:28:35.233790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.333 [2024-12-09 05:28:35.233798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:21.333 [2024-12-09 05:28:35.233805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.333 [2024-12-09 05:28:35.233813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:21.333 [2024-12-09 05:28:35.233827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.333 [2024-12-09 05:28:35.233835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:21.333 [2024-12-09 05:28:35.233842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.333 [2024-12-09 05:28:35.233850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393d00 is same with the state(6) to be set 00:35:21.333 [2024-12-09 05:28:35.243724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393d00 (9): Bad file descriptor 00:35:21.333 05:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:21.333 05:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.333 05:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:21.333 05:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.333 05:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:21.333 05:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:21.333 05:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:35:21.333 [2024-12-09 05:28:35.253764] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:21.333 [2024-12-09 05:28:35.253785] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:21.333 [2024-12-09 05:28:35.253796] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:21.333 [2024-12-09 05:28:35.253804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:21.333 [2024-12-09 05:28:35.253837] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:22.718 [2024-12-09 05:28:36.305897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:22.718 [2024-12-09 05:28:36.306011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393d00 with addr=10.0.0.2, port=4420 00:35:22.718 [2024-12-09 05:28:36.306056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393d00 is same with the state(6) to be set 00:35:22.718 [2024-12-09 05:28:36.306133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393d00 (9): Bad file descriptor 00:35:22.718 [2024-12-09 05:28:36.307620] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:35:22.718 [2024-12-09 05:28:36.307723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:22.718 [2024-12-09 05:28:36.307757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:22.718 [2024-12-09 05:28:36.307791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:22.718 [2024-12-09 05:28:36.307834] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:22.718 [2024-12-09 05:28:36.307860] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:22.718 [2024-12-09 05:28:36.307882] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:22.718 [2024-12-09 05:28:36.307921] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:22.718 [2024-12-09 05:28:36.307943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:22.718 05:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.718 05:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:22.718 05:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:23.661 [2024-12-09 05:28:37.310393] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:23.661 [2024-12-09 05:28:37.310416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
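This window shows the timeouts from the bdev_nvme_start_discovery call doing their job after the test deleted 10.0.0.2 and downed cvl_0_0 (steps @75/@76 earlier): keep-alive reads fail with errno 110 (Connection timed out), the reconnect to 10.0.0.2:4420 fails with the same errno, and the controller is finally declared lost. One reading of the knobs, based on this log rather than on the documentation:

    # --reconnect-delay-sec 1        wait ~1 s between reconnect attempts
    # --fast-io-fail-timeout-sec 1   fail queued I/O ~1 s after disconnect
    # --ctrlr-loss-timeout-sec 2     stop retrying and delete the ctrlr after
    #                                ~2 s, as the 05:28:37 entries that follow show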
00:35:23.661 [2024-12-09 05:28:37.310433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:23.661 [2024-12-09 05:28:37.310443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:23.661 [2024-12-09 05:28:37.310451] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:23.661 [2024-12-09 05:28:37.310459] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:23.661 [2024-12-09 05:28:37.310464] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:23.661 [2024-12-09 05:28:37.310469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:23.661 [2024-12-09 05:28:37.310498] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:23.661 [2024-12-09 05:28:37.310524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.661 [2024-12-09 05:28:37.310536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.661 [2024-12-09 05:28:37.310547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.661 [2024-12-09 05:28:37.310555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.661 [2024-12-09 05:28:37.310564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.661 [2024-12-09 05:28:37.310571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.661 [2024-12-09 05:28:37.310579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.661 [2024-12-09 05:28:37.310586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.661 [2024-12-09 05:28:37.310594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.661 [2024-12-09 05:28:37.310602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.661 [2024-12-09 05:28:37.310609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:35:23.661 [2024-12-09 05:28:37.310943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393580 (9): Bad file descriptor 00:35:23.661 [2024-12-09 05:28:37.311959] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:23.661 [2024-12-09 05:28:37.311978] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:23.661 05:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:24.602 05:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:24.602 05:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:24.602 05:28:38 
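With the old bdev gone, steps @82/@83 above restore the interface; the discovery service from the original start_discovery call is still polling, so no new RPC is needed for the path to come back. The restore plus the wait, condensed (wait_for_bdev is the sketch helper shown earlier):

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # a new controller instance, hence the new name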
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:24.602 05:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.602 05:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:24.602 05:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.602 05:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:24.602 05:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.602 05:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:24.863 05:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:25.435 [2024-12-09 05:28:39.362898] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:25.435 [2024-12-09 05:28:39.362921] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:25.435 [2024-12-09 05:28:39.362945] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:25.696 [2024-12-09 05:28:39.451201] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:25.696 05:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:25.696 [2024-12-09 05:28:39.675496] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:25.696 [2024-12-09 05:28:39.676537] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x615000395880:1 started. 
00:35:25.696 [2024-12-09 05:28:39.677899] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:25.696 [2024-12-09 05:28:39.677939] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:25.696 [2024-12-09 05:28:39.677973] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:25.696 [2024-12-09 05:28:39.677990] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:25.696 [2024-12-09 05:28:39.677999] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:25.696 [2024-12-09 05:28:39.682810] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x615000395880 was disconnected and freed. delete nvme_qpair. 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1778889 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1778889 ']' 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1778889 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1778889 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1778889' 00:35:27.080 killing process with pid 1778889 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1778889 00:35:27.080 05:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1778889 00:35:27.341 
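The killprocess sequence above (kill -0, uname, ps comm check, kill, wait) is the harness's guarded process teardown. A simplified sketch of the pattern visible in the trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                      # still alive?
        if [ "$(uname)" = Linux ]; then
            # refuse to kill a sudo wrapper; target the reactor itself
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }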
05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:27.341 rmmod nvme_tcp 00:35:27.341 rmmod nvme_fabrics 00:35:27.341 rmmod nvme_keyring 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1778781 ']' 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1778781 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1778781 ']' 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1778781 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.341 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1778781 00:35:27.602 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:27.602 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:27.602 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1778781' 00:35:27.602 killing process with pid 1778781 00:35:27.602 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1778781 00:35:27.602 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1778781 00:35:28.275 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:28.275 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:28.275 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:28.275 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:28.275 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:28.275 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:28.275 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:28.276 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:28.276 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:28.276 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.276 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:28.276 05:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:30.316 00:35:30.316 real 0m25.419s 00:35:30.316 user 0m30.580s 00:35:30.316 sys 0m7.422s 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.316 ************************************ 00:35:30.316 END TEST nvmf_discovery_remove_ifc 00:35:30.316 ************************************ 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.316 ************************************ 00:35:30.316 START TEST nvmf_identify_kernel_target 00:35:30.316 ************************************ 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:30.316 * Looking for test storage... 
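
The nvmftestfini sequence above tears things down in a fixed order: stop the target process, unload the initiator kernel modules, strip only the SPDK-tagged firewall rules, then remove the namespace plumbing. A condensed sketch of that order under stated assumptions ($nvmfpid is illustrative, and the ip netns delete call stands in for the traced _remove_spdk_ns helper, whose body is not shown in this log):

    # Teardown in the order the trace performs it; $nvmfpid is illustrative.
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done   # wait for the target to exit
    sync
    modprobe -v -r nvme-tcp        # the harness retries this while connections drain
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address
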
00:35:30.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:30.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.316 --rc genhtml_branch_coverage=1 00:35:30.316 --rc genhtml_function_coverage=1 00:35:30.316 --rc genhtml_legend=1 00:35:30.316 --rc geninfo_all_blocks=1 00:35:30.316 --rc geninfo_unexecuted_blocks=1 00:35:30.316 00:35:30.316 ' 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:30.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.316 --rc genhtml_branch_coverage=1 00:35:30.316 --rc genhtml_function_coverage=1 00:35:30.316 --rc genhtml_legend=1 00:35:30.316 --rc geninfo_all_blocks=1 00:35:30.316 --rc geninfo_unexecuted_blocks=1 00:35:30.316 00:35:30.316 ' 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:30.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.316 --rc genhtml_branch_coverage=1 00:35:30.316 --rc genhtml_function_coverage=1 00:35:30.316 --rc genhtml_legend=1 00:35:30.316 --rc geninfo_all_blocks=1 00:35:30.316 --rc geninfo_unexecuted_blocks=1 00:35:30.316 00:35:30.316 ' 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:30.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.316 --rc genhtml_branch_coverage=1 00:35:30.316 --rc genhtml_function_coverage=1 00:35:30.316 --rc genhtml_legend=1 00:35:30.316 --rc geninfo_all_blocks=1 00:35:30.316 --rc geninfo_unexecuted_blocks=1 00:35:30.316 00:35:30.316 ' 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.316 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.317 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.317 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.317 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.317 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:30.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:30.577 05:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:38.717 05:28:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:38.717 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:38.718 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:38.718 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:38.718 Found net devices under 0000:31:00.0: cvl_0_0 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:38.718 Found net devices under 0000:31:00.1: cvl_0_1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:38.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:38.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:35:38.718 00:35:38.718 --- 10.0.0.2 ping statistics --- 00:35:38.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.718 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:38.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:38.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:35:38.718 00:35:38.718 --- 10.0.0.1 ping statistics --- 00:35:38.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.718 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.718 05:28:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:38.718 05:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:41.268 Waiting for block devices as requested 00:35:41.268 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:41.268 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:41.529 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:41.529 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:41.529 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:41.789 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:41.789 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:41.789 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:41.789 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:42.048 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:42.048 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:42.308 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:42.308 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:42.308 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:42.567 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:42.567 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:42.567 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
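
The configure_kernel_target steps traced below drive the kernel nvmet target entirely through configfs: make the subsystem, namespace, and port directories, write the attributes, then symlink the subsystem into the port. Stripped of the xtrace noise, the essential sequence is the following sketch (device path, NQN, and address are the ones used in this run; the attr_model and attr_allow_any_host writes are assumptions about which attributes the bare echo lines in the trace target):

    # Export a local block device over NVMe/TCP via the kernel nvmet configfs tree.
    # Mirrors the traced commands; /dev/nvme0n1 and the NQN come from this run.
    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir -p "$subsys/namespaces/1" "$port"
    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"        # assumed target of the SPDK-... echo
    echo 1             > "$subsys/attr_allow_any_host"                    # assumed target of one bare "echo 1"
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$port/addr_traddr"
    echo tcp           > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

After this, nvme discover -t tcp -a 10.0.0.1 -s 4420 should return the two-record discovery log page shown in the trace (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn).
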
00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:43.136 No valid GPT data, bailing 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:43.136 05:28:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:35:43.136 00:35:43.136 Discovery Log Number of Records 2, Generation counter 2 00:35:43.136 =====Discovery Log Entry 0====== 00:35:43.136 trtype: tcp 00:35:43.136 adrfam: ipv4 00:35:43.136 subtype: current discovery subsystem 00:35:43.136 treq: not specified, sq flow control disable supported 00:35:43.136 portid: 1 00:35:43.136 trsvcid: 4420 00:35:43.136 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:43.136 traddr: 10.0.0.1 00:35:43.136 eflags: none 00:35:43.136 sectype: none 00:35:43.136 =====Discovery Log Entry 1====== 00:35:43.136 trtype: tcp 00:35:43.136 adrfam: ipv4 00:35:43.136 subtype: nvme subsystem 00:35:43.136 treq: not specified, sq flow control disable 
supported 00:35:43.136 portid: 1 00:35:43.136 trsvcid: 4420 00:35:43.136 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:43.136 traddr: 10.0.0.1 00:35:43.136 eflags: none 00:35:43.136 sectype: none 00:35:43.136 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:43.136 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:43.398 ===================================================== 00:35:43.398 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:43.398 ===================================================== 00:35:43.398 Controller Capabilities/Features 00:35:43.398 ================================ 00:35:43.398 Vendor ID: 0000 00:35:43.398 Subsystem Vendor ID: 0000 00:35:43.398 Serial Number: e993d00b81476a031040 00:35:43.398 Model Number: Linux 00:35:43.398 Firmware Version: 6.8.9-20 00:35:43.398 Recommended Arb Burst: 0 00:35:43.398 IEEE OUI Identifier: 00 00 00 00:35:43.398 Multi-path I/O 00:35:43.398 May have multiple subsystem ports: No 00:35:43.398 May have multiple controllers: No 00:35:43.398 Associated with SR-IOV VF: No 00:35:43.398 Max Data Transfer Size: Unlimited 00:35:43.398 Max Number of Namespaces: 0 00:35:43.398 Max Number of I/O Queues: 1024 00:35:43.398 NVMe Specification Version (VS): 1.3 00:35:43.398 NVMe Specification Version (Identify): 1.3 00:35:43.398 Maximum Queue Entries: 1024 00:35:43.398 Contiguous Queues Required: No 00:35:43.398 Arbitration Mechanisms Supported 00:35:43.398 Weighted Round Robin: Not Supported 00:35:43.398 Vendor Specific: Not Supported 00:35:43.398 Reset Timeout: 7500 ms 00:35:43.398 Doorbell Stride: 4 bytes 00:35:43.398 NVM Subsystem Reset: Not Supported 00:35:43.398 Command Sets Supported 00:35:43.398 NVM Command Set: Supported 00:35:43.398 Boot Partition: Not Supported 00:35:43.398 Memory Page Size Minimum: 4096 bytes 00:35:43.398 Memory Page Size Maximum: 4096 bytes 00:35:43.398 Persistent Memory Region: Not Supported 00:35:43.398 Optional Asynchronous Events Supported 00:35:43.398 Namespace Attribute Notices: Not Supported 00:35:43.398 Firmware Activation Notices: Not Supported 00:35:43.398 ANA Change Notices: Not Supported 00:35:43.398 PLE Aggregate Log Change Notices: Not Supported 00:35:43.398 LBA Status Info Alert Notices: Not Supported 00:35:43.398 EGE Aggregate Log Change Notices: Not Supported 00:35:43.398 Normal NVM Subsystem Shutdown event: Not Supported 00:35:43.398 Zone Descriptor Change Notices: Not Supported 00:35:43.398 Discovery Log Change Notices: Supported 00:35:43.398 Controller Attributes 00:35:43.398 128-bit Host Identifier: Not Supported 00:35:43.398 Non-Operational Permissive Mode: Not Supported 00:35:43.398 NVM Sets: Not Supported 00:35:43.398 Read Recovery Levels: Not Supported 00:35:43.398 Endurance Groups: Not Supported 00:35:43.398 Predictable Latency Mode: Not Supported 00:35:43.398 Traffic Based Keep ALive: Not Supported 00:35:43.398 Namespace Granularity: Not Supported 00:35:43.398 SQ Associations: Not Supported 00:35:43.398 UUID List: Not Supported 00:35:43.398 Multi-Domain Subsystem: Not Supported 00:35:43.398 Fixed Capacity Management: Not Supported 00:35:43.398 Variable Capacity Management: Not Supported 00:35:43.398 Delete Endurance Group: Not Supported 00:35:43.398 Delete NVM Set: Not Supported 00:35:43.398 Extended LBA Formats Supported: Not Supported 00:35:43.398 Flexible Data Placement 
Supported: Not Supported 00:35:43.398 00:35:43.398 Controller Memory Buffer Support 00:35:43.398 ================================ 00:35:43.398 Supported: No 00:35:43.398 00:35:43.398 Persistent Memory Region Support 00:35:43.398 ================================ 00:35:43.398 Supported: No 00:35:43.398 00:35:43.398 Admin Command Set Attributes 00:35:43.398 ============================ 00:35:43.398 Security Send/Receive: Not Supported 00:35:43.398 Format NVM: Not Supported 00:35:43.398 Firmware Activate/Download: Not Supported 00:35:43.398 Namespace Management: Not Supported 00:35:43.398 Device Self-Test: Not Supported 00:35:43.398 Directives: Not Supported 00:35:43.398 NVMe-MI: Not Supported 00:35:43.398 Virtualization Management: Not Supported 00:35:43.398 Doorbell Buffer Config: Not Supported 00:35:43.398 Get LBA Status Capability: Not Supported 00:35:43.398 Command & Feature Lockdown Capability: Not Supported 00:35:43.398 Abort Command Limit: 1 00:35:43.398 Async Event Request Limit: 1 00:35:43.398 Number of Firmware Slots: N/A 00:35:43.398 Firmware Slot 1 Read-Only: N/A 00:35:43.398 Firmware Activation Without Reset: N/A 00:35:43.398 Multiple Update Detection Support: N/A 00:35:43.398 Firmware Update Granularity: No Information Provided 00:35:43.398 Per-Namespace SMART Log: No 00:35:43.399 Asymmetric Namespace Access Log Page: Not Supported 00:35:43.399 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:43.399 Command Effects Log Page: Not Supported 00:35:43.399 Get Log Page Extended Data: Supported 00:35:43.399 Telemetry Log Pages: Not Supported 00:35:43.399 Persistent Event Log Pages: Not Supported 00:35:43.399 Supported Log Pages Log Page: May Support 00:35:43.399 Commands Supported & Effects Log Page: Not Supported 00:35:43.399 Feature Identifiers & Effects Log Page:May Support 00:35:43.399 NVMe-MI Commands & Effects Log Page: May Support 00:35:43.399 Data Area 4 for Telemetry Log: Not Supported 00:35:43.399 Error Log Page Entries Supported: 1 00:35:43.399 Keep Alive: Not Supported 00:35:43.399 00:35:43.399 NVM Command Set Attributes 00:35:43.399 ========================== 00:35:43.399 Submission Queue Entry Size 00:35:43.399 Max: 1 00:35:43.399 Min: 1 00:35:43.399 Completion Queue Entry Size 00:35:43.399 Max: 1 00:35:43.399 Min: 1 00:35:43.399 Number of Namespaces: 0 00:35:43.399 Compare Command: Not Supported 00:35:43.399 Write Uncorrectable Command: Not Supported 00:35:43.399 Dataset Management Command: Not Supported 00:35:43.399 Write Zeroes Command: Not Supported 00:35:43.399 Set Features Save Field: Not Supported 00:35:43.399 Reservations: Not Supported 00:35:43.399 Timestamp: Not Supported 00:35:43.399 Copy: Not Supported 00:35:43.399 Volatile Write Cache: Not Present 00:35:43.399 Atomic Write Unit (Normal): 1 00:35:43.399 Atomic Write Unit (PFail): 1 00:35:43.399 Atomic Compare & Write Unit: 1 00:35:43.399 Fused Compare & Write: Not Supported 00:35:43.399 Scatter-Gather List 00:35:43.399 SGL Command Set: Supported 00:35:43.399 SGL Keyed: Not Supported 00:35:43.399 SGL Bit Bucket Descriptor: Not Supported 00:35:43.399 SGL Metadata Pointer: Not Supported 00:35:43.399 Oversized SGL: Not Supported 00:35:43.399 SGL Metadata Address: Not Supported 00:35:43.399 SGL Offset: Supported 00:35:43.399 Transport SGL Data Block: Not Supported 00:35:43.399 Replay Protected Memory Block: Not Supported 00:35:43.399 00:35:43.399 Firmware Slot Information 00:35:43.399 ========================= 00:35:43.399 Active slot: 0 00:35:43.399 00:35:43.399 00:35:43.399 Error Log 00:35:43.399 
========= 00:35:43.399 00:35:43.399 Active Namespaces 00:35:43.399 ================= 00:35:43.399 Discovery Log Page 00:35:43.399 ================== 00:35:43.399 Generation Counter: 2 00:35:43.399 Number of Records: 2 00:35:43.399 Record Format: 0 00:35:43.399 00:35:43.399 Discovery Log Entry 0 00:35:43.399 ---------------------- 00:35:43.399 Transport Type: 3 (TCP) 00:35:43.399 Address Family: 1 (IPv4) 00:35:43.399 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:43.399 Entry Flags: 00:35:43.399 Duplicate Returned Information: 0 00:35:43.399 Explicit Persistent Connection Support for Discovery: 0 00:35:43.399 Transport Requirements: 00:35:43.399 Secure Channel: Not Specified 00:35:43.399 Port ID: 1 (0x0001) 00:35:43.399 Controller ID: 65535 (0xffff) 00:35:43.399 Admin Max SQ Size: 32 00:35:43.399 Transport Service Identifier: 4420 00:35:43.399 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:43.399 Transport Address: 10.0.0.1 00:35:43.399 Discovery Log Entry 1 00:35:43.399 ---------------------- 00:35:43.399 Transport Type: 3 (TCP) 00:35:43.399 Address Family: 1 (IPv4) 00:35:43.399 Subsystem Type: 2 (NVM Subsystem) 00:35:43.399 Entry Flags: 00:35:43.399 Duplicate Returned Information: 0 00:35:43.399 Explicit Persistent Connection Support for Discovery: 0 00:35:43.399 Transport Requirements: 00:35:43.399 Secure Channel: Not Specified 00:35:43.399 Port ID: 1 (0x0001) 00:35:43.399 Controller ID: 65535 (0xffff) 00:35:43.399 Admin Max SQ Size: 32 00:35:43.399 Transport Service Identifier: 4420 00:35:43.399 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:43.399 Transport Address: 10.0.0.1 00:35:43.399 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:43.399 get_feature(0x01) failed 00:35:43.399 get_feature(0x02) failed 00:35:43.399 get_feature(0x04) failed 00:35:43.399 ===================================================== 00:35:43.399 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:43.399 ===================================================== 00:35:43.399 Controller Capabilities/Features 00:35:43.399 ================================ 00:35:43.399 Vendor ID: 0000 00:35:43.399 Subsystem Vendor ID: 0000 00:35:43.399 Serial Number: ff6360f3ea46561a4a72 00:35:43.399 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:43.399 Firmware Version: 6.8.9-20 00:35:43.399 Recommended Arb Burst: 6 00:35:43.399 IEEE OUI Identifier: 00 00 00 00:35:43.399 Multi-path I/O 00:35:43.399 May have multiple subsystem ports: Yes 00:35:43.399 May have multiple controllers: Yes 00:35:43.399 Associated with SR-IOV VF: No 00:35:43.399 Max Data Transfer Size: Unlimited 00:35:43.399 Max Number of Namespaces: 1024 00:35:43.399 Max Number of I/O Queues: 128 00:35:43.399 NVMe Specification Version (VS): 1.3 00:35:43.399 NVMe Specification Version (Identify): 1.3 00:35:43.399 Maximum Queue Entries: 1024 00:35:43.399 Contiguous Queues Required: No 00:35:43.399 Arbitration Mechanisms Supported 00:35:43.399 Weighted Round Robin: Not Supported 00:35:43.399 Vendor Specific: Not Supported 00:35:43.399 Reset Timeout: 7500 ms 00:35:43.399 Doorbell Stride: 4 bytes 00:35:43.399 NVM Subsystem Reset: Not Supported 00:35:43.399 Command Sets Supported 00:35:43.399 NVM Command Set: Supported 00:35:43.399 Boot Partition: Not Supported 00:35:43.399 
Memory Page Size Minimum: 4096 bytes 00:35:43.399 Memory Page Size Maximum: 4096 bytes 00:35:43.399 Persistent Memory Region: Not Supported 00:35:43.399 Optional Asynchronous Events Supported 00:35:43.399 Namespace Attribute Notices: Supported 00:35:43.399 Firmware Activation Notices: Not Supported 00:35:43.399 ANA Change Notices: Supported 00:35:43.399 PLE Aggregate Log Change Notices: Not Supported 00:35:43.399 LBA Status Info Alert Notices: Not Supported 00:35:43.399 EGE Aggregate Log Change Notices: Not Supported 00:35:43.399 Normal NVM Subsystem Shutdown event: Not Supported 00:35:43.399 Zone Descriptor Change Notices: Not Supported 00:35:43.399 Discovery Log Change Notices: Not Supported 00:35:43.399 Controller Attributes 00:35:43.400 128-bit Host Identifier: Supported 00:35:43.400 Non-Operational Permissive Mode: Not Supported 00:35:43.400 NVM Sets: Not Supported 00:35:43.400 Read Recovery Levels: Not Supported 00:35:43.400 Endurance Groups: Not Supported 00:35:43.400 Predictable Latency Mode: Not Supported 00:35:43.400 Traffic Based Keep ALive: Supported 00:35:43.400 Namespace Granularity: Not Supported 00:35:43.400 SQ Associations: Not Supported 00:35:43.400 UUID List: Not Supported 00:35:43.400 Multi-Domain Subsystem: Not Supported 00:35:43.400 Fixed Capacity Management: Not Supported 00:35:43.400 Variable Capacity Management: Not Supported 00:35:43.400 Delete Endurance Group: Not Supported 00:35:43.400 Delete NVM Set: Not Supported 00:35:43.400 Extended LBA Formats Supported: Not Supported 00:35:43.400 Flexible Data Placement Supported: Not Supported 00:35:43.400 00:35:43.400 Controller Memory Buffer Support 00:35:43.400 ================================ 00:35:43.400 Supported: No 00:35:43.400 00:35:43.400 Persistent Memory Region Support 00:35:43.400 ================================ 00:35:43.400 Supported: No 00:35:43.400 00:35:43.400 Admin Command Set Attributes 00:35:43.400 ============================ 00:35:43.400 Security Send/Receive: Not Supported 00:35:43.400 Format NVM: Not Supported 00:35:43.400 Firmware Activate/Download: Not Supported 00:35:43.400 Namespace Management: Not Supported 00:35:43.400 Device Self-Test: Not Supported 00:35:43.400 Directives: Not Supported 00:35:43.400 NVMe-MI: Not Supported 00:35:43.400 Virtualization Management: Not Supported 00:35:43.400 Doorbell Buffer Config: Not Supported 00:35:43.400 Get LBA Status Capability: Not Supported 00:35:43.400 Command & Feature Lockdown Capability: Not Supported 00:35:43.400 Abort Command Limit: 4 00:35:43.400 Async Event Request Limit: 4 00:35:43.400 Number of Firmware Slots: N/A 00:35:43.400 Firmware Slot 1 Read-Only: N/A 00:35:43.400 Firmware Activation Without Reset: N/A 00:35:43.400 Multiple Update Detection Support: N/A 00:35:43.400 Firmware Update Granularity: No Information Provided 00:35:43.400 Per-Namespace SMART Log: Yes 00:35:43.400 Asymmetric Namespace Access Log Page: Supported 00:35:43.400 ANA Transition Time : 10 sec 00:35:43.400 00:35:43.400 Asymmetric Namespace Access Capabilities 00:35:43.400 ANA Optimized State : Supported 00:35:43.400 ANA Non-Optimized State : Supported 00:35:43.400 ANA Inaccessible State : Supported 00:35:43.400 ANA Persistent Loss State : Supported 00:35:43.400 ANA Change State : Supported 00:35:43.400 ANAGRPID is not changed : No 00:35:43.400 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:43.400 00:35:43.400 ANA Group Identifier Maximum : 128 00:35:43.400 Number of ANA Group Identifiers : 128 00:35:43.400 Max Number of Allowed Namespaces : 1024 00:35:43.400 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:43.400 Command Effects Log Page: Supported 00:35:43.400 Get Log Page Extended Data: Supported 00:35:43.400 Telemetry Log Pages: Not Supported 00:35:43.400 Persistent Event Log Pages: Not Supported 00:35:43.400 Supported Log Pages Log Page: May Support 00:35:43.400 Commands Supported & Effects Log Page: Not Supported 00:35:43.400 Feature Identifiers & Effects Log Page:May Support 00:35:43.400 NVMe-MI Commands & Effects Log Page: May Support 00:35:43.400 Data Area 4 for Telemetry Log: Not Supported 00:35:43.400 Error Log Page Entries Supported: 128 00:35:43.400 Keep Alive: Supported 00:35:43.400 Keep Alive Granularity: 1000 ms 00:35:43.400 00:35:43.400 NVM Command Set Attributes 00:35:43.400 ========================== 00:35:43.400 Submission Queue Entry Size 00:35:43.400 Max: 64 00:35:43.400 Min: 64 00:35:43.400 Completion Queue Entry Size 00:35:43.400 Max: 16 00:35:43.400 Min: 16 00:35:43.400 Number of Namespaces: 1024 00:35:43.400 Compare Command: Not Supported 00:35:43.400 Write Uncorrectable Command: Not Supported 00:35:43.400 Dataset Management Command: Supported 00:35:43.400 Write Zeroes Command: Supported 00:35:43.400 Set Features Save Field: Not Supported 00:35:43.400 Reservations: Not Supported 00:35:43.400 Timestamp: Not Supported 00:35:43.400 Copy: Not Supported 00:35:43.400 Volatile Write Cache: Present 00:35:43.400 Atomic Write Unit (Normal): 1 00:35:43.400 Atomic Write Unit (PFail): 1 00:35:43.400 Atomic Compare & Write Unit: 1 00:35:43.400 Fused Compare & Write: Not Supported 00:35:43.400 Scatter-Gather List 00:35:43.400 SGL Command Set: Supported 00:35:43.400 SGL Keyed: Not Supported 00:35:43.400 SGL Bit Bucket Descriptor: Not Supported 00:35:43.400 SGL Metadata Pointer: Not Supported 00:35:43.400 Oversized SGL: Not Supported 00:35:43.400 SGL Metadata Address: Not Supported 00:35:43.400 SGL Offset: Supported 00:35:43.400 Transport SGL Data Block: Not Supported 00:35:43.400 Replay Protected Memory Block: Not Supported 00:35:43.400 00:35:43.400 Firmware Slot Information 00:35:43.400 ========================= 00:35:43.400 Active slot: 0 00:35:43.400 00:35:43.400 Asymmetric Namespace Access 00:35:43.400 =========================== 00:35:43.400 Change Count : 0 00:35:43.400 Number of ANA Group Descriptors : 1 00:35:43.400 ANA Group Descriptor : 0 00:35:43.400 ANA Group ID : 1 00:35:43.400 Number of NSID Values : 1 00:35:43.400 Change Count : 0 00:35:43.400 ANA State : 1 00:35:43.400 Namespace Identifier : 1 00:35:43.400 00:35:43.400 Commands Supported and Effects 00:35:43.400 ============================== 00:35:43.400 Admin Commands 00:35:43.400 -------------- 00:35:43.400 Get Log Page (02h): Supported 00:35:43.400 Identify (06h): Supported 00:35:43.400 Abort (08h): Supported 00:35:43.400 Set Features (09h): Supported 00:35:43.400 Get Features (0Ah): Supported 00:35:43.400 Asynchronous Event Request (0Ch): Supported 00:35:43.400 Keep Alive (18h): Supported 00:35:43.400 I/O Commands 00:35:43.400 ------------ 00:35:43.400 Flush (00h): Supported 00:35:43.400 Write (01h): Supported LBA-Change 00:35:43.400 Read (02h): Supported 00:35:43.400 Write Zeroes (08h): Supported LBA-Change 00:35:43.401 Dataset Management (09h): Supported 00:35:43.401 00:35:43.401 Error Log 00:35:43.401 ========= 00:35:43.401 Entry: 0 00:35:43.401 Error Count: 0x3 00:35:43.401 Submission Queue Id: 0x0 00:35:43.401 Command Id: 0x5 00:35:43.401 Phase Bit: 0 00:35:43.401 Status Code: 0x2 00:35:43.401 Status Code Type: 0x0 00:35:43.401 Do Not Retry: 1 00:35:43.401 
Error Location: 0x28 00:35:43.401 LBA: 0x0 00:35:43.401 Namespace: 0x0 00:35:43.401 Vendor Log Page: 0x0 00:35:43.401 ----------- 00:35:43.401 Entry: 1 00:35:43.401 Error Count: 0x2 00:35:43.401 Submission Queue Id: 0x0 00:35:43.401 Command Id: 0x5 00:35:43.401 Phase Bit: 0 00:35:43.401 Status Code: 0x2 00:35:43.401 Status Code Type: 0x0 00:35:43.401 Do Not Retry: 1 00:35:43.401 Error Location: 0x28 00:35:43.401 LBA: 0x0 00:35:43.401 Namespace: 0x0 00:35:43.401 Vendor Log Page: 0x0 00:35:43.401 ----------- 00:35:43.401 Entry: 2 00:35:43.401 Error Count: 0x1 00:35:43.401 Submission Queue Id: 0x0 00:35:43.401 Command Id: 0x4 00:35:43.401 Phase Bit: 0 00:35:43.401 Status Code: 0x2 00:35:43.401 Status Code Type: 0x0 00:35:43.401 Do Not Retry: 1 00:35:43.401 Error Location: 0x28 00:35:43.401 LBA: 0x0 00:35:43.401 Namespace: 0x0 00:35:43.401 Vendor Log Page: 0x0 00:35:43.401 00:35:43.401 Number of Queues 00:35:43.401 ================ 00:35:43.401 Number of I/O Submission Queues: 128 00:35:43.401 Number of I/O Completion Queues: 128 00:35:43.401 00:35:43.401 ZNS Specific Controller Data 00:35:43.401 ============================ 00:35:43.401 Zone Append Size Limit: 0 00:35:43.401 00:35:43.401 00:35:43.401 Active Namespaces 00:35:43.401 ================= 00:35:43.401 get_feature(0x05) failed 00:35:43.401 Namespace ID:1 00:35:43.401 Command Set Identifier: NVM (00h) 00:35:43.401 Deallocate: Supported 00:35:43.401 Deallocated/Unwritten Error: Not Supported 00:35:43.401 Deallocated Read Value: Unknown 00:35:43.401 Deallocate in Write Zeroes: Not Supported 00:35:43.401 Deallocated Guard Field: 0xFFFF 00:35:43.401 Flush: Supported 00:35:43.401 Reservation: Not Supported 00:35:43.401 Namespace Sharing Capabilities: Multiple Controllers 00:35:43.401 Size (in LBAs): 3750748848 (1788GiB) 00:35:43.401 Capacity (in LBAs): 3750748848 (1788GiB) 00:35:43.401 Utilization (in LBAs): 3750748848 (1788GiB) 00:35:43.401 UUID: 8017032c-d7f0-4fe4-8362-f7ccfaf0db34 00:35:43.401 Thin Provisioning: Not Supported 00:35:43.401 Per-NS Atomic Units: Yes 00:35:43.401 Atomic Write Unit (Normal): 8 00:35:43.401 Atomic Write Unit (PFail): 8 00:35:43.401 Preferred Write Granularity: 8 00:35:43.401 Atomic Compare & Write Unit: 8 00:35:43.401 Atomic Boundary Size (Normal): 0 00:35:43.401 Atomic Boundary Size (PFail): 0 00:35:43.401 Atomic Boundary Offset: 0 00:35:43.401 NGUID/EUI64 Never Reused: No 00:35:43.401 ANA group ID: 1 00:35:43.401 Namespace Write Protected: No 00:35:43.401 Number of LBA Formats: 1 00:35:43.401 Current LBA Format: LBA Format #00 00:35:43.401 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:43.401 00:35:43.401 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:43.401 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:43.401 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:43.719 rmmod nvme_tcp 00:35:43.719 rmmod nvme_fabrics 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.719 05:28:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:45.630 05:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:49.840 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:49.840 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:49.840 00:35:49.840 real 0m19.569s 00:35:49.840 user 0m5.300s 00:35:49.840 sys 0m11.213s 00:35:49.840 05:29:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:49.840 05:29:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:49.840 ************************************ 00:35:49.840 END TEST nvmf_identify_kernel_target 00:35:49.840 ************************************ 00:35:49.840 05:29:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:49.840 05:29:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:49.840 05:29:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:49.840 05:29:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.840 ************************************ 00:35:49.840 START TEST nvmf_auth_host 00:35:49.840 ************************************ 00:35:49.840 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:50.101 * Looking for test storage... 
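Before nvmf_auth_host starts, the clean_kernel_target sequence above dismantles the configfs-based kernel target in the reverse order of its construction: the namespace is disabled (the bare echo 0 in the xtrace presumably writes to the namespace's enable attribute; xtrace does not echo redirections), the port-to-subsystem symlink is removed, then the namespace, port, and subsystem directories are deleted before the nvmet modules are unloaded. A condensed sketch of that order, using the nqn.2016-06.io.spdk:testnqn paths from this run:

# Teardown order mirrored from clean_kernel_target in nvmf/common.sh
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet

The configfs directories must be empty before rmdir can succeed, which is why the namespace and the port link are removed first.
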
00:35:50.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.101 --rc genhtml_branch_coverage=1 00:35:50.101 --rc genhtml_function_coverage=1 00:35:50.101 --rc genhtml_legend=1 00:35:50.101 --rc geninfo_all_blocks=1 00:35:50.101 --rc geninfo_unexecuted_blocks=1 00:35:50.101 00:35:50.101 ' 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.101 --rc genhtml_branch_coverage=1 00:35:50.101 --rc genhtml_function_coverage=1 00:35:50.101 --rc genhtml_legend=1 00:35:50.101 --rc geninfo_all_blocks=1 00:35:50.101 --rc geninfo_unexecuted_blocks=1 00:35:50.101 00:35:50.101 ' 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.101 --rc genhtml_branch_coverage=1 00:35:50.101 --rc genhtml_function_coverage=1 00:35:50.101 --rc genhtml_legend=1 00:35:50.101 --rc geninfo_all_blocks=1 00:35:50.101 --rc geninfo_unexecuted_blocks=1 00:35:50.101 00:35:50.101 ' 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.101 --rc genhtml_branch_coverage=1 00:35:50.101 --rc genhtml_function_coverage=1 00:35:50.101 --rc genhtml_legend=1 00:35:50.101 --rc geninfo_all_blocks=1 00:35:50.101 --rc geninfo_unexecuted_blocks=1 00:35:50.101 00:35:50.101 ' 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.101 05:29:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.101 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:50.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:50.102 05:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.272 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.273 05:29:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:58.273 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:58.273 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.273 
05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:58.273 Found net devices under 0000:31:00.0: cvl_0_0 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:58.273 Found net devices under 0000:31:00.1: cvl_0_1 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.273 05:29:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:58.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:35:58.273 00:35:58.273 --- 10.0.0.2 ping statistics --- 00:35:58.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.273 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:58.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:35:58.273 00:35:58.273 --- 10.0.0.1 ping statistics --- 00:35:58.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.273 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1793574 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1793574 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1793574 ']' 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
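At this point nvmftestinit has wired up a two-endpoint NVMe/TCP topology on a single host: the target-side E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, so test traffic genuinely traverses the NIC pair, and nvmf_tgt is then started inside that namespace (the ip netns exec ... nvmf_tgt invocation that follows). The same wiring, condensed from the commands in the log above:

# Target NIC moves into its own namespace; the initiator NIC stays in the root one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP (port 4420) from the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions before starting nvmf_tgt
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
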
00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:58.273 05:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=21ca91121fd02c18484671da4a6c6774 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RRy 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 21ca91121fd02c18484671da4a6c6774 0 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 21ca91121fd02c18484671da4a6c6774 0 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=21ca91121fd02c18484671da4a6c6774 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:58.535 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RRy 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RRy 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.RRy 00:35:58.797 05:29:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=38dfc0371f7a3b39f142cd2b1d8fab3fe1f0c1ed96dda0ba7f99f73ea1ce8471 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xCR 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 38dfc0371f7a3b39f142cd2b1d8fab3fe1f0c1ed96dda0ba7f99f73ea1ce8471 3 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 38dfc0371f7a3b39f142cd2b1d8fab3fe1f0c1ed96dda0ba7f99f73ea1ce8471 3 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=38dfc0371f7a3b39f142cd2b1d8fab3fe1f0c1ed96dda0ba7f99f73ea1ce8471 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xCR 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xCR 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xCR 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6876a7077a84526fed2eebb71f81266b8fa7c4bff0b69b87 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ose 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # 
format_dhchap_key 6876a7077a84526fed2eebb71f81266b8fa7c4bff0b69b87 0 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6876a7077a84526fed2eebb71f81266b8fa7c4bff0b69b87 0 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6876a7077a84526fed2eebb71f81266b8fa7c4bff0b69b87 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ose 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ose 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ose 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:58.797 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=39e39873673d465c9a1f830005b8263933b1d2d6da65871e 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.EkL 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 39e39873673d465c9a1f830005b8263933b1d2d6da65871e 2 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 39e39873673d465c9a1f830005b8263933b1d2d6da65871e 2 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=39e39873673d465c9a1f830005b8263933b1d2d6da65871e 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.EkL 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.EkL 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.EkL 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@751 -- # local digest len file key 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:58.798 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0907d8beb15c6d22ef1b7364f9302433 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RUi 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0907d8beb15c6d22ef1b7364f9302433 1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0907d8beb15c6d22ef1b7364f9302433 1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0907d8beb15c6d22ef1b7364f9302433 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RUi 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RUi 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.RUi 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f6c38cad8f00bd54f57c5eec2b6e45f4 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JDB 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f6c38cad8f00bd54f57c5eec2b6e45f4 1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f6c38cad8f00bd54f57c5eec2b6e45f4 1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key 
digest 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f6c38cad8f00bd54f57c5eec2b6e45f4 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JDB 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JDB 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.JDB 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0819b73be5fac9d1cabcec487cb54789cd3c2c7acd6bf5d6 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ext 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0819b73be5fac9d1cabcec487cb54789cd3c2c7acd6bf5d6 2 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0819b73be5fac9d1cabcec487cb54789cd3c2c7acd6bf5d6 2 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0819b73be5fac9d1cabcec487cb54789cd3c2c7acd6bf5d6 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ext 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ext 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ext 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:59.060 05:29:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4c7d79911f174533ab7ac4f0a40ce506 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yHz 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4c7d79911f174533ab7ac4f0a40ce506 0 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4c7d79911f174533ab7ac4f0a40ce506 0 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4c7d79911f174533ab7ac4f0a40ce506 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:59.060 05:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:59.060 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yHz 00:35:59.060 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yHz 00:35:59.060 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.yHz 00:35:59.060 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:59.061 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:59.061 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.061 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:59.061 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:59.061 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce51c5fcdfbe8acc7d6f0909695348b050e233b9da0a7c42628034f27f2ef0fe 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.nmg 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce51c5fcdfbe8acc7d6f0909695348b050e233b9da0a7c42628034f27f2ef0fe 3 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce51c5fcdfbe8acc7d6f0909695348b050e233b9da0a7c42628034f27f2ef0fe 3 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ce51c5fcdfbe8acc7d6f0909695348b050e233b9da0a7c42628034f27f2ef0fe 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.nmg 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.nmg 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.nmg 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1793574 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1793574 ']' 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.322 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.323 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.323 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RRy 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xCR ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xCR 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ose 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.586 
05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.EkL ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EkL 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.RUi 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.JDB ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JDB 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ext 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yHz ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yHz 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.nmg 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
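Everything above is host/auth.sh building its key material: for each of keys[0]..keys[4] (plus the paired controller keys ckeys[0]..ckeys[3]), gen_dhchap_key draws len/2 random bytes with xxd from /dev/urandom, format_dhchap_key wraps the hex string in the DHHC-1 interchange format via an inline python snippet, the temp file is chmod'd 0600, and keyring_file_add_key registers it under the name (key0..key4 / ckey0..ckey3) that the later attach calls reference. A minimal sketch of that pipeline, assuming the DH-HMAC-CHAP secret representation base64(secret plus CRC-32 of secret, little-endian); the helper name, python3 invocation, and digest-to-prefix mapping here are illustrative, not nvmf/common.sh's exact code:

gen_dhchap_key_sketch() {
  local digest=$1 hexlen=$2   # e.g. sha256 32 -> 16 random bytes, 32 hex chars
  local key file
  key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)
  file=$(mktemp -t "spdk.key-$digest.XXX")
  # Wrap the hex string as a DHHC-1 secret: base64 over the ASCII secret
  # plus its CRC-32 (little-endian), prefixed with the hash identifier
  # (null=00, sha256=01, sha384=02, sha512=03, as in the digests map above).
  KEY=$key DIGEST=$digest python3 - > "$file" <<'PY'
import base64, os, zlib
ids = {"null": "00", "sha256": "01", "sha384": "02", "sha512": "03"}
secret = os.environ["KEY"].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{ids[os.environ['DIGEST']]}:{base64.b64encode(secret + crc).decode()}:")
PY
  chmod 0600 "$file"
  echo "$file"
}

The names key0..key4 and ckey0..ckey3 registered above are exactly what the --dhchap-key / --dhchap-ctrlr-key arguments point at later in the run.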
00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:59.586 05:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:02.891 Waiting for block devices as requested 00:36:03.152 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:03.152 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:03.152 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:03.152 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:03.414 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:03.414 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:03.414 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:03.675 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:03.675 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:03.675 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:03.937 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:03.937 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:03.937 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:03.937 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:04.205 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:04.205 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:04.205 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:05.164 No valid GPT data, bailing 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:05.164 05:29:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:05.164 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:36:05.424 00:36:05.424 Discovery Log Number of Records 2, Generation counter 2 00:36:05.424 =====Discovery Log Entry 0====== 00:36:05.424 trtype: tcp 00:36:05.424 adrfam: ipv4 00:36:05.424 subtype: current discovery subsystem 00:36:05.424 treq: not specified, sq flow control disable supported 00:36:05.424 portid: 1 00:36:05.424 trsvcid: 4420 00:36:05.424 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:05.424 traddr: 10.0.0.1 00:36:05.424 eflags: none 00:36:05.424 sectype: none 00:36:05.424 =====Discovery Log Entry 1====== 00:36:05.424 trtype: tcp 00:36:05.424 adrfam: ipv4 00:36:05.424 subtype: nvme subsystem 00:36:05.424 treq: not specified, sq flow control disable supported 00:36:05.424 portid: 1 00:36:05.424 trsvcid: 4420 00:36:05.424 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:05.424 traddr: 10.0.0.1 00:36:05.424 eflags: none 00:36:05.424 sectype: none 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.424 nvme0n1 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.424 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
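With the kernel target assembled above purely from configfs primitives (mkdir/echo/ln -s under /sys/kernel/config/nvmet) and nvmet_auth_set_key loading the DHHC-1 secret on the target side, each connect_authenticate pass boils down to four RPCs against the SPDK host. A condensed sketch of the keyid-0 pass that begins here, assuming scripts/rpc.py from the SPDK tree talking to the default /var/tmp/spdk.sock; the flag spellings are taken from the log, and the expected-name comment mirrors the [[ nvme0 == nvme0 ]] check:

# Restrict the host to one digest/dhgroup combination, then attach with
# the matching key pair; a successful DH-HMAC-CHAP handshake produces a
# controller named nvme0 (and a bdev nvme0n1), which is verified and
# torn down again before the next combination.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0

The outer loops then repeat this for each digest (sha256, sha384, sha512), each dhgroup from ffdhe2048 through ffdhe8192, and each keyid 0-4, which is why the nvme0n1 / bdev_nvme_get_controllers / bdev_nvme_detach_controller pattern recurs throughout the records below.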
00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.684 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.685 nvme0n1 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.685 05:29:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.685 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.945 nvme0n1 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.945 05:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.206 nvme0n1 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.206 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.467 nvme0n1 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.467 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.468 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.729 nvme0n1 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.729 05:29:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.729 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.990 nvme0n1 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:06.990 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.991 
05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.991 05:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.253 nvme0n1 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:07.253 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.254 05:29:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.254 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.517 nvme0n1 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.517 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.518 05:29:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.518 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.779 nvme0n1 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:07.779 05:29:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.779 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.039 nvme0n1 00:36:08.039 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.039 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.039 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.039 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.039 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.039 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.039 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.040 05:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.300 nvme0n1 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:08.300 05:29:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.300 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.560 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.821 nvme0n1 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
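[Annotation] The trace above repeats one cycle per (digest, dhgroup, keyid) combination: nvmet_auth_set_key installs the key (and, when present, the bidirectional ckey) on the target, connect_authenticate then pins the host to that digest and DH group via bdev_nvme_set_options, attaches over TCP with the matching --dhchap-key/--dhchap-ctrlr-key, confirms the controller came up, and detaches before the next combination. A condensed sketch of that cycle, using only the RPCs visible in the trace (the wrapper loop and its bounds are illustrative; this section exercises sha256 with the ffdhe3072/ffdhe4096/ffdhe6144 groups):

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
  for keyid in "${!keys[@]}"; do
    # target side: install key/ckey for this digest/dhgroup/keyid
    nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
    # host side: restrict the negotiable digest and DH group...
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # ...then authenticate; the ctrlr key is passed only when a ckey exists
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # verify the controller exists, then tear it down for the next combination
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done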
00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.821 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.822 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:08.822 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.822 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.083 nvme0n1 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.083 05:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.344 nvme0n1 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.344 05:29:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.344 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.345 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.605 nvme0n1 00:36:09.605 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.605 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.605 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.605 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.605 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.866 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.867 05:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.128 nvme0n1 00:36:10.128 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.128 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.128 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.128 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.128 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.128 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.389 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.389 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.389 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.389 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.389 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.389 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.389 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 
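[Annotation] Each secret echoed by nvmet_auth_set_key uses the DH-HMAC-CHAP secret representation DHHC-1:NN:<base64>:, where NN names the hash used to transform the configured secret (00 = no transform; 01/02/03 = SHA-256/384/512 per the spec) and the base64 payload is the secret followed by a 4-byte CRC-32. A quick way to sanity-check one of the keys above (illustrative only, not part of the test):

key='DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX:'
# field 3 is the base64 payload; a 32-byte secret plus the CRC-32 decodes to 36 bytes
echo -n "$key" | cut -d: -f3 | base64 -d | wc -c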
00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.390 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.651 nvme0n1 00:36:10.651 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.651 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.651 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.651 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.651 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.651 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.651 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.651 05:29:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.651 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.651 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.912 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.913 05:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.175 nvme0n1 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.175 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.746 nvme0n1 00:36:11.746 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.746 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.746 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.746 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.746 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.746 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.746 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.747 05:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.320 nvme0n1 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.320 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:12.894 nvme0n1 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.894 05:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.841 nvme0n1 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:13.841 
05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.841 05:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.414 nvme0n1 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.414 
05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.414 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.985 nvme0n1 00:36:14.985 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.985 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.985 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.985 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.985 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.985 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.246 05:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.246 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.247 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.820 nvme0n1 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.820 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:15.821 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.821 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.081 nvme0n1 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.082 05:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.082 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.343 nvme0n1 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.343 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:16.344 05:29:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.344 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.605 nvme0n1 00:36:16.605 05:29:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.605 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.866 nvme0n1 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.866 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.867 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.127 nvme0n1 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.127 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.128 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:17.128 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.128 05:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.388 nvme0n1 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.388 
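On the target side, each nvmet_auth_set_key call in this stretch is visible only as four echoes (host/auth.sh@48-51): the HMAC name, the DH group, the key, and, when one exists, the controller key. These line up with the Linux kernel nvmet configfs host attributes; a minimal sketch under that assumption ($key/$ckey as set at host/auth.sh@45-46, host entry created elsewhere):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host/dhchap_hash"      # host/auth.sh@48
echo ffdhe3072      > "$host/dhchap_dhgroup"   # host/auth.sh@49
echo "$key"         > "$host/dhchap_key"       # host/auth.sh@50
[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # the [[ -z ]] guard at @51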
05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.388 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.389 05:29:31 
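Every digest/dhgroup/keyid combination then runs the same four-step initiator cycle, traced as connect_authenticate (host/auth.sh@55-65): restrict the allowed DH-HMAC-CHAP parameters, attach with the matching keys, confirm the controller came up, detach. Condensed from the RPCs logged here (a sketch, not the verbatim function body):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"                           # host/auth.sh@60
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}       # host/auth.sh@61
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # @64
    rpc_cmd bdev_nvme_detach_controller nvme0                  # host/auth.sh@65
}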
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.389 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.649 nvme0n1 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.649 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.650 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.959 nvme0n1 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.959 nvme0n1 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.959 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:18.219 
05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.219 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.220 05:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.220 nvme0n1 00:36:18.220 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.220 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.220 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.220 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.220 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.220 
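For keyid 4 there is no controller key (ckey= above), and the [[ -z '' ]] guard at host/auth.sh@51 skips the target-side write; the initiator handles the same asymmetry with the array idiom at host/auth.sh@58. ${var:+word} expands to word only when the variable is set and non-empty, so the attach gains --dhchap-ctrlr-key only on bidirectional runs:

ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty array when ckeys[4] is empty
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

which is why the key4 attach above carries no --dhchap-ctrlr-key argument.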
05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.480 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.740 nvme0n1 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.740 05:29:32 
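The sweep driving all of this is the loop nest at host/auth.sh@101-104: for the current digest (sha384 here), every DH group is crossed with every key id, re-keying the target and reconnecting each time. Schematically, from the loop headers in the trace (the dhgroups array itself is not shown in this excerpt; ffdhe2048, ffdhe3072 and ffdhe4096 pass in order):

for dhgroup in "${dhgroups[@]}"; do                         # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                          # host/auth.sh@102
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
    done
done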
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.740 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.001 nvme0n1 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.001 05:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.262 nvme0n1 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.262 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.523 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.783 nvme0n1 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.783 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:19.784 05:29:33 
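The get_main_ns_ip trace that precedes every attach (nvmf/common.sh@769-@783) picks the address to dial by mapping the transport to the name of an environment variable and resolving it with bash indirect expansion. A sketch consistent with the trace; the transport variable is shown only by value ('tcp') in this log, so $TEST_TRANSPORT below is an assumed name:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # 'tcp' in this run (@775)
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # -> NVMF_INITIATOR_IP (@776)
        [[ -z ${!ip} ]] && return 1                           # resolved value must be set (@778)
        echo "${!ip}"                                         # 10.0.0.1 (@783)
    }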
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.784 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.044 nvme0n1 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
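The attach for key 4 just above carries no --dhchap-ctrlr-key, unlike the key 2 and key 3 rounds. That is the host/auth.sh@58 expansion at work: ${ckeys[keyid]:+...} produces the two option words only when a controller key is configured, and ckeys[4] is empty, so that slot exercises unidirectional authentication:

    # From host/auth.sh@58: optional controller-key arguments built as an array.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # keyid=3, ckeys[3] set   -> ckey=(--dhchap-ctrlr-key ckey3)   # bidirectional auth
    # keyid=4, ckeys[4] empty -> ckey=()                           # flag dropped entirely
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"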
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.044 05:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.615 nvme0n1 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.615 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.616 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.186 nvme0n1 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.187 05:29:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.187 05:29:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.187 05:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.447 nvme0n1 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.447 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:21.707 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.707 
05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.967 nvme0n1 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.967 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.228 05:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.489 nvme0n1 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.489 05:29:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.489 05:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.431 nvme0n1 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
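rpc_cmd in these traces is the autotest wrapper that forwards each call to SPDK's scripts/rpc.py over the target's RPC socket. For reference, the ffdhe8192 round for key 0 that starts above reduces to two RPCs; the direct invocations below are a hedged equivalent (default RPC socket assumed, and key0/ckey0 are key names presumably registered earlier in auth.sh, outside this excerpt):

    # Constrain what the host is willing to negotiate for DH-HMAC-CHAP.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Attach; this only yields a usable controller if authentication completes.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0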
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.431 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.432 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.432 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.432 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.432 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.432 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.432 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:23.432 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.432 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.002 nvme0n1 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
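The check that follows each attach is the same throughout, traced at host/auth.sh@64-@65: the controller list must contain the freshly authenticated nvme0, which is then detached so the next key is tried against a clean state:

    # Verify the authenticated attach produced a controller, then tear it down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0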
xtrace_disable 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:24.002 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.003 
05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.003 05:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.574 nvme0n1 00:36:24.574 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.574 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.574 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.574 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.574 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.574 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.836 05:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.407 nvme0n1 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.407 05:29:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.407 05:29:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.407 05:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.350 nvme0n1 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.350 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:26.351 nvme0n1 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.351 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.613 nvme0n1 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:26.613 
05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.613 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.614 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.876 nvme0n1 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.876 
05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.876 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.139 nvme0n1 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.139 05:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.139 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.401 nvme0n1 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.401 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.662 nvme0n1 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.662 
05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.662 05:29:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.662 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.663 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.663 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.663 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.663 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.663 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.663 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.663 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:27.663 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.663 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.923 nvme0n1 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:27.923 05:29:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.923 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.183 nvme0n1 00:36:28.183 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.183 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.183 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.183 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.183 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.183 05:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.183 05:29:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.183 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.444 nvme0n1 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.444 
05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.444 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
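The records above repeat one fixed cycle per digest/DH-group/key-id combination: auth.sh provisions the DHHC-1 secret for the host NQN on the kernel nvmet target, restricts the SPDK initiator to a single digest and DH group via bdev_nvme_set_options, attaches over TCP with --dhchap-key (plus --dhchap-ctrlr-key whenever a controller secret exists, making the authentication bidirectional), confirms that bdev_nvme_get_controllers reports nvme0, and detaches so the next combination starts from a fresh connection. A minimal sketch of that loop, reconstructed from the trace, follows; the nvmet configfs paths, the rpc_cmd wrapper definition, and the assumption that key$keyid/ckey$keyid name secrets already registered with the SPDK keyring are inferences, not quotes from the actual script:

  #!/usr/bin/env bash
  # Hedged sketch of the DH-HMAC-CHAP loop visible in this trace.
  hostnqn=nqn.2024-02.io.spdk:host0      # values taken verbatim from the trace
  subnqn=nqn.2024-02.io.spdk:cnode0
  traddr=10.0.0.1 trsvcid=4420

  # Stand-in for the test suite's rpc_cmd helper; adjust the path to your SPDK tree.
  rpc_cmd() { ./scripts/rpc.py "$@"; }

  digests=(sha384 sha512)                # the digests exercised in this stretch of the log
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
  # Raw DHHC-1 secrets for the nvmet side; the keyid-0 pair is copied from the trace.
  keys[0]="DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX:"
  ckeys[0]="DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=:"

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Target side: provision the host's CHAP parameters in nvmet configfs
        # (these paths are an assumption; the trace shows only the echoed values).
        host=/sys/kernel/config/nvmet/hosts/$hostnqn
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup" > "$host/dhchap_dhgroup"
        echo "${keys[keyid]}" > "$host/dhchap_key"
        [[ -n ${ckeys[keyid]:-} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"

        # Host side: limit SPDK to this digest/dhgroup, then authenticate.
        # key$keyid / ckey$keyid are names of keys assumed to have been
        # registered with the SPDK keyring earlier in the script, not raw secrets.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$traddr" -s "$trsvcid" -q "$hostnqn" -n "$subnqn" \
          --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

        # A controller only appears if DH-HMAC-CHAP succeeded, so this check
        # is the pass/fail gate for the combination under test.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
  done

Detaching nvme0 after every probe appears to be deliberate: bdev_nvme_set_options takes effect only for controllers attached afterwards, so tearing the connection down is what guarantees each digest/dhgroup pair is genuinely renegotiated rather than riding on the previous session.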
00:36:28.705 nvme0n1 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:28.705 05:29:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.705 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.706 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.966 nvme0n1 00:36:28.966 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.967 05:29:42 
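get_main_ns_ip (nvmf/common.sh@769-783), traced above before every attach, just maps the transport to the environment variable holding a reachable initiator-side address. Reconstructed from the xtrace; TEST_TRANSPORT is an assumption, as the trace only shows its expanded value, tcp:

    get_main_ns_ip() {
        local ip
        # Values are the NAMES of env vars to dereference, not addresses
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion; 10.0.0.1 in this run
        echo "${!ip}"
    }

The pair of [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]] entries in the trace is the expanded form of the two guards above, and the final echo 10.0.0.1 is the dereferenced NVMF_INITIATOR_IP.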
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.967 05:29:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.967 05:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.227 nvme0n1 00:36:29.227 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.227 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.227 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.227 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.227 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.227 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.516 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.517 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.783 nvme0n1 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.783 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.121 nvme0n1 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.121 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.122 05:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.384 nvme0n1 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.384 05:29:44 
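The host-side half, connect_authenticate (host/auth.sh@55-61), is what produces the bdev_nvme_set_options / bdev_nvme_attach_controller pairs in this trace. A sketch reconstructed from the expanded commands; rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, and the key0/ckey0 names refer to keys registered with the bdev layer earlier in the run (not visible in this excerpt):

    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest="$1" dhgroup="$2" keyid="$3"
        # Optional controller key: expands to nothing when ckeys[keyid] is empty
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # Pin the initiator to the one digest/DH group under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }

Restricting the digests and DH groups on each pass is what makes a successful attach meaningful: the connection can only have authenticated with the combination being tested.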
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.384 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.953 nvme0n1 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:30.953 05:29:44 
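Each attach is followed by the same success check (host/auth.sh@64-65): list the controllers, confirm the single expected name came up, and detach so the next combination starts clean. The odd-looking [[ nvme0 == \n\v\m\e\0 ]] entries are just how bash xtrace prints the right-hand side of a pattern match, with every character escaped. Stripped of the trace noise, the check is:

    # Exactly the expected controller must exist after the attach
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # Tear it down before the next digest/dhgroup/keyid pass
    rpc_cmd bdev_nvme_detach_controller nvme0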
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.953 05:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.531 nvme0n1 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.531 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.791 nvme0n1 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.791 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.050 05:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.310 nvme0n1 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.310 05:29:46 
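keyid=4 is the one key in this set with no companion controller secret, which is why its trace shows ckey= followed by [[ -z '' ]] and no fourth echo. The ${ckeys[keyid]:+...} expansion at host/auth.sh@58 handles the same asymmetry on the host side: it yields the two extra arguments only when a bidirectional key exists. A standalone illustration of the idiom, with placeholder values:

    ckeys=([0]="DHHC-1:03:placeholder:" [4]="")   # shape only; real secrets elided
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=4 -> <no controller key>

Building the optional flags as an array and splicing in "${ckey[@]}" keeps the attach invocation identical for both the unidirectional and bidirectional cases.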
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.310 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.571 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.833 nvme0n1 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX: 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: ]] 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzhkZmMwMzcxZjdhM2IzOWYxNDJjZDJiMWQ4ZmFiM2ZlMWYwYzFlZDk2ZGRhMGJhN2Y5OWY3M2VhMWNlODQ3Me5pVlg=: 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.833 05:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.776 nvme0n1 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.776 05:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.348 nvme0n1 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.348 05:29:48 
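
[annotation] The trace is mid-way through repeating, for each keyid in turn, the same verify cycle: program the key into the kernel nvmet target, configure the SPDK host for the matching digest and DH group, attach, confirm the controller appears, detach. A condensed sketch of that loop, reconstructed from the host/auth.sh line numbers visible above (helper names are the suite's own; this is not the verbatim script):

    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 ffdhe8192 "$keyid"   # program the kernel target side
        # keyid 4 has no ctrlr key, so auth degrades to unidirectional there
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
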
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.348 05:29:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.348 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.919 nvme0n1 00:36:34.919 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.919 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.919 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.919 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.919 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDgxOWI3M2JlNWZhYzlkMWNhYmNlYzQ4N2NiNTQ3ODljZDNjMmM3YWNkNmJmNWQ2IsJ6Fw==: 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: ]] 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGM3ZDc5OTExZjE3NDUzM2FiN2FjNGYwYTQwY2U1MDZsbJru: 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.180 05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.180 
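
[annotation] The DHHC-1 strings echoed into the target are the NVMe in-band auth interchange format for DH-HMAC-CHAP secrets: DHHC-1:<t>:<base64>:, where <t> names the transformation applied to the secret (00 none, 01/02/03 HMAC-SHA-256/384/512) and the base64 payload carries the raw secret followed by a 4-byte CRC-32 tail. A quick, illustrative sanity check against the keyid-0 secret from earlier in the trace:

    key='DHHC-1:00:MjFjYTkxMTIxZmQwMmMxODQ4NDY3MWRhNGE2YzY3NzTm5CtX:'
    b64=${key#DHHC-1:*:}; b64=${b64%:}        # strip the wrapper, keep the payload
    printf '%s' "$b64" | base64 -d | wc -c    # 36 = 32-byte secret + 4-byte CRC-32
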
05:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.750 nvme0n1 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2U1MWM1ZmNkZmJlOGFjYzdkNmYwOTA5Njk1MzQ4YjA1MGUyMzNiOWRhMGE3YzQyNjI4MDM0ZjI3ZjJlZjBmZeX3IBg=: 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.750 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.751 05:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 nvme0n1 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 request: 00:36:36.693 { 00:36:36.693 "name": "nvme0", 00:36:36.693 "trtype": "tcp", 00:36:36.693 "traddr": "10.0.0.1", 00:36:36.693 "adrfam": "ipv4", 00:36:36.693 "trsvcid": "4420", 00:36:36.693 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:36.693 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:36.693 "prchk_reftag": false, 00:36:36.693 "prchk_guard": false, 00:36:36.693 "hdgst": false, 00:36:36.693 "ddgst": false, 00:36:36.693 "allow_unrecognized_csi": false, 00:36:36.693 "method": "bdev_nvme_attach_controller", 00:36:36.693 "req_id": 1 00:36:36.693 } 00:36:36.693 Got JSON-RPC error response 00:36:36.693 response: 00:36:36.693 { 00:36:36.693 "code": -5, 00:36:36.693 "message": "Input/output error" 00:36:36.693 } 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:36.693 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
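
[annotation] The request/response pair just above is the first negative check of the sha256/ffdhe2048 round: the target now expects key 1, so an attach that offers no DHCHAP key at all has to fail, and it does, as JSON-RPC error -5 (Input/output error). The NOT wrapper driving this (visible in the common/autotest_common.sh lines) inverts the exit status; stripped of its argument-validation plumbing it is roughly:

    NOT() {
        if "$@"; then
            return 1    # command was expected to fail but succeeded
        fi
        return 0        # expected failure observed
    }

    # usage as in the trace: an unauthenticated attach must be rejected
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
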
00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.694 request: 00:36:36.694 { 00:36:36.694 "name": "nvme0", 00:36:36.694 "trtype": "tcp", 00:36:36.694 "traddr": "10.0.0.1", 00:36:36.694 "adrfam": "ipv4", 00:36:36.694 "trsvcid": "4420", 00:36:36.694 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:36.694 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:36.694 "prchk_reftag": false, 00:36:36.694 "prchk_guard": false, 00:36:36.694 "hdgst": false, 00:36:36.694 "ddgst": false, 00:36:36.694 "dhchap_key": "key2", 00:36:36.694 "allow_unrecognized_csi": false, 00:36:36.694 "method": "bdev_nvme_attach_controller", 00:36:36.694 "req_id": 1 00:36:36.694 } 00:36:36.694 Got JSON-RPC error response 00:36:36.694 response: 00:36:36.694 { 00:36:36.694 "code": -5, 00:36:36.694 "message": "Input/output error" 00:36:36.694 } 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
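
[annotation] Same pattern, second variant: the host offers key2 while the target holds key1, so the DH-HMAC-CHAP handshake fails and the attach again surfaces as -5. rpc_cmd is, as far as the harness shows, a thin wrapper around scripts/rpc.py, so the equivalent direct invocation would be (sketch built from the request object above):

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
    # expected: JSON-RPC error -5 (Input/output error); the key mismatch is
    # reported as a failed connect rather than a distinct auth error code
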
00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.694 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.955 request: 00:36:36.955 { 00:36:36.955 "name": "nvme0", 00:36:36.955 "trtype": "tcp", 00:36:36.955 "traddr": "10.0.0.1", 00:36:36.955 "adrfam": "ipv4", 00:36:36.955 "trsvcid": "4420", 00:36:36.955 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:36.955 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:36.955 "prchk_reftag": false, 00:36:36.955 "prchk_guard": false, 00:36:36.956 "hdgst": false, 00:36:36.956 "ddgst": false, 00:36:36.956 "dhchap_key": "key1", 00:36:36.956 "dhchap_ctrlr_key": "ckey2", 00:36:36.956 "allow_unrecognized_csi": false, 00:36:36.956 "method": "bdev_nvme_attach_controller", 00:36:36.956 "req_id": 1 00:36:36.956 } 00:36:36.956 Got JSON-RPC error response 00:36:36.956 response: 00:36:36.956 { 00:36:36.956 "code": -5, 00:36:36.956 "message": "Input/output 
error" 00:36:36.956 } 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.956 nvme0n1 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.956 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.226 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.226 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.226 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:37.226 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.226 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.226 05:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.226 request: 00:36:37.226 { 00:36:37.226 "name": "nvme0", 00:36:37.226 "dhchap_key": "key1", 00:36:37.226 "dhchap_ctrlr_key": "ckey2", 00:36:37.226 "method": "bdev_nvme_set_keys", 00:36:37.226 "req_id": 1 00:36:37.226 } 00:36:37.226 Got JSON-RPC error response 00:36:37.226 response: 00:36:37.226 { 00:36:37.226 "code": -13, 00:36:37.226 "message": "Permission denied" 00:36:37.226 } 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:37.226 05:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:38.169 05:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.169 05:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:38.169 05:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.169 05:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.169 05:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.428 05:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:38.428 05:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:39.368 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.368 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg3NmE3MDc3YTg0NTI2ZmVkMmVlYmI3MWY4MTI2NmI4ZmE3YzRiZmYwYjY5Yjg3JxvziA==: 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: ]] 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzllMzk4NzM2NzNkNDY1YzlhMWY4MzAwMDViODI2MzkzM2IxZDJkNmRhNjU4NzFl8dhOnQ==: 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.369 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.629 nvme0n1 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDkwN2Q4YmViMTVjNmQyMmVmMWI3MzY0ZjkzMDI0MzMZ4iDh: 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: ]] 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjZjMzhjYWQ4ZjAwYmQ1NGY1N2M1ZWVjMmI2ZTQ1ZjTQ0H63: 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.629 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.630 request: 00:36:39.630 { 00:36:39.630 "name": "nvme0", 00:36:39.630 "dhchap_key": "key2", 00:36:39.630 "dhchap_ctrlr_key": "ckey1", 00:36:39.630 "method": "bdev_nvme_set_keys", 00:36:39.630 "req_id": 1 00:36:39.630 } 00:36:39.630 Got JSON-RPC error response 00:36:39.630 response: 00:36:39.630 { 00:36:39.630 "code": -13, 00:36:39.630 "message": "Permission denied" 00:36:39.630 } 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:39.630 05:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:40.569 05:29:54 
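
[annotation] The two Permission denied responses bracket the rekey happy path: bdev_nvme_set_keys rotated the live nvme0 controller to key2/ckey2 right after the target was reprovisioned to match, but each half-matching combination tried here (key1/ckey2 earlier, key2/ckey1 just above) is refused with -13. The jq length / sleep 1s polling loops that follow appear to be waiting for the 1-second ctrlr-loss timeout, set at attach time, to reap the connection. Compressed shape of the sequence (sketch; in the trace a reconnect and a target-side rekey sit between the negative checks):

    rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2      # accepted
    NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2  # -13 Permission denied
    NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1  # -13 Permission denied
    while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do sleep 1; done
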
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:40.569 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:40.569 rmmod nvme_tcp 00:36:40.828 rmmod nvme_fabrics 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1793574 ']' 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1793574 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1793574 ']' 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1793574 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.828 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1793574 00:36:40.829 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:40.829 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:40.829 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1793574' 00:36:40.829 killing process with pid 1793574 00:36:40.829 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1793574 00:36:40.829 05:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1793574 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:41.398 05:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:43.308 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:43.568 05:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:46.873 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:46.873 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:46.873 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:46.873 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:46.873 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:46.873 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:46.873 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:47.135 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:47.395 05:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.RRy /tmp/spdk.key-null.ose /tmp/spdk.key-sha256.RUi /tmp/spdk.key-sha384.ext /tmp/spdk.key-sha512.nmg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:47.395 05:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:51.596 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
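
[annotation] Teardown mirrors setup in reverse through nvmet's configfs tree: drop the host from allowed_hosts, remove the host NQN, disable the namespace, unlink the port from the subsystem, then remove the namespace, port, and subsystem directories before unloading the modules. Condensed from the nvmf/common.sh lines above (sketch; the trace only shows a bare echo 0, so the enable-attribute path is an assumption):

    subnqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    cfg=/sys/kernel/config/nvmet
    rm    "$cfg/subsystems/$subnqn/allowed_hosts/$hostnqn"
    rmdir "$cfg/hosts/$hostnqn"
    echo 0 > "$cfg/subsystems/$subnqn/namespaces/1/enable"   # assumed target of the 'echo 0'
    rm -f "$cfg/ports/1/subsystems/$subnqn"
    rmdir "$cfg/subsystems/$subnqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$subnqn"
    modprobe -r nvmet_tcp nvmet
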
00:36:51.596 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:51.596 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:51.596 00:36:51.596 real 1m1.396s 00:36:51.596 user 0m54.952s 00:36:51.596 sys 0m16.289s 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.596 ************************************ 00:36:51.596 END TEST nvmf_auth_host 00:36:51.596 ************************************ 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.596 ************************************ 00:36:51.596 START TEST nvmf_digest 00:36:51.596 ************************************ 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:51.596 * Looking for test storage... 
00:36:51.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:51.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.596 --rc genhtml_branch_coverage=1 00:36:51.596 --rc genhtml_function_coverage=1 00:36:51.596 --rc genhtml_legend=1 00:36:51.596 --rc geninfo_all_blocks=1 00:36:51.596 --rc geninfo_unexecuted_blocks=1 00:36:51.596 00:36:51.596 ' 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:51.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.596 --rc genhtml_branch_coverage=1 00:36:51.596 --rc genhtml_function_coverage=1 00:36:51.596 --rc genhtml_legend=1 00:36:51.596 --rc geninfo_all_blocks=1 00:36:51.596 --rc geninfo_unexecuted_blocks=1 00:36:51.596 00:36:51.596 ' 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:51.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.596 --rc genhtml_branch_coverage=1 00:36:51.596 --rc genhtml_function_coverage=1 00:36:51.596 --rc genhtml_legend=1 00:36:51.596 --rc geninfo_all_blocks=1 00:36:51.596 --rc geninfo_unexecuted_blocks=1 00:36:51.596 00:36:51.596 ' 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:51.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.596 --rc genhtml_branch_coverage=1 00:36:51.596 --rc genhtml_function_coverage=1 00:36:51.596 --rc genhtml_legend=1 00:36:51.596 --rc geninfo_all_blocks=1 00:36:51.596 --rc geninfo_unexecuted_blocks=1 00:36:51.596 00:36:51.596 ' 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.596 
05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.596 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:51.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:51.597 05:30:05 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:51.597 05:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:59.734 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:59.735 
05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:59.735 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:59.735 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:59.735 Found net devices under 0000:31:00.0: cvl_0_0 
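Note on the discovery step traced above: the harness maps each matched PCI function to its kernel net device by globbing sysfs, which is what produces the "Found net devices under ..." records. A minimal sketch of that lookup, using the first E810 port from this log as the example address (the glob, suffix-strip, and echo are taken from nvmf/common.sh@411/@427/@428 as traced):

    pci=0000:31:00.0
    # each bound netdev appears as a directory under the device's net/ node
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # strip the sysfs prefix to get interface names, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"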
00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:59.735 Found net devices under 0000:31:00.1: cvl_0_1 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:59.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:59.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:36:59.735 00:36:59.735 --- 10.0.0.2 ping statistics --- 00:36:59.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.735 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:59.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:59.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:36:59.735 00:36:59.735 --- 10.0.0.1 ping statistics --- 00:36:59.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.735 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:59.735 05:30:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:59.735 ************************************ 00:36:59.735 START TEST nvmf_digest_clean 00:36:59.735 ************************************ 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:59.735 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1811028 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1811028 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1811028 ']' 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.736 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:59.736 [2024-12-09 05:30:13.173694] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:36:59.736 [2024-12-09 05:30:13.173853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:59.736 [2024-12-09 05:30:13.338508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.736 [2024-12-09 05:30:13.465049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:59.736 [2024-12-09 05:30:13.465114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:59.736 [2024-12-09 05:30:13.465127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:59.736 [2024-12-09 05:30:13.465142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:59.736 [2024-12-09 05:30:13.465155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
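Before the reactor comes up (next record), it is worth collecting the namespace wiring that the preceding nvmftestinit trace performed: the target-side port cvl_0_0 is moved into a fresh network namespace and the initiator port cvl_0_1 stays in the root namespace, so the TCP test runs over real E810 hardware between two IP endpoints on one box. A condensed sketch with names and addresses taken from the trace (flag-for-flag details live in nvmf/common.sh and are not re-verified here; full binary paths are abbreviated):

    ip netns add cvl_0_0_ns_spdk                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target itself then runs inside the namespace (nvmf/common.sh@508):
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc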
00:36:59.736 [2024-12-09 05:30:13.466671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:59.997 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:59.997 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:59.997 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:59.997 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:59.997 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:00.258 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:00.258 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:00.258 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:00.258 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:00.258 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.258 05:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:00.519 null0 00:37:00.520 [2024-12-09 05:30:14.296530] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:00.520 [2024-12-09 05:30:14.320910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1811374 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1811374 /var/tmp/bperf.sock 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1811374 ']' 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:00.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:00.520 05:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:00.520 [2024-12-09 05:30:14.420570] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:00.520 [2024-12-09 05:30:14.420700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1811374 ] 00:37:00.781 [2024-12-09 05:30:14.579144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.781 [2024-12-09 05:30:14.702122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.353 05:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:01.353 05:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:01.353 05:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:01.353 05:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:01.353 05:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:01.924 05:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:01.924 05:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:01.924 nvme0n1 00:37:02.185 05:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:02.185 05:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:02.185 Running I/O for 2 seconds... 
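The run that has just started was brought up with three RPCs against the bdevperf socket, all visible in the trace above; rpc.py and bdevperf.py below abbreviate the full scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths from the log. --ddgst enables the NVMe/TCP data digest, which is what makes this a digest test: every data PDU carries a CRC32C that the accel framework must compute.

    # start the bdevperf application framework (it was launched --wait-for-rpc)
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    # attach to the target with TCP data digest (CRC32C on data PDUs) enabled
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # surfaces bdev nvme0n1
    # kick off the configured workload (randread, 4 KiB, qd 128, 2 s)
    bdevperf.py -s /var/tmp/bperf.sock perform_tests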
00:37:04.077 16134.00 IOPS, 63.02 MiB/s [2024-12-09T04:30:18.074Z] 17248.50 IOPS, 67.38 MiB/s 00:37:04.077 Latency(us) 00:37:04.077 [2024-12-09T04:30:18.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.077 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:04.077 nvme0n1 : 2.00 17282.21 67.51 0.00 0.00 7398.84 3167.57 26760.53 00:37:04.077 [2024-12-09T04:30:18.074Z] =================================================================================================================== 00:37:04.077 [2024-12-09T04:30:18.074Z] Total : 17282.21 67.51 0.00 0.00 7398.84 3167.57 26760.53 00:37:04.077 { 00:37:04.077 "results": [ 00:37:04.077 { 00:37:04.077 "job": "nvme0n1", 00:37:04.077 "core_mask": "0x2", 00:37:04.077 "workload": "randread", 00:37:04.077 "status": "finished", 00:37:04.077 "queue_depth": 128, 00:37:04.077 "io_size": 4096, 00:37:04.077 "runtime": 2.003505, 00:37:04.077 "iops": 17282.21292185445, 00:37:04.077 "mibps": 67.50864422599395, 00:37:04.077 "io_failed": 0, 00:37:04.077 "io_timeout": 0, 00:37:04.077 "avg_latency_us": 7398.840899157642, 00:37:04.077 "min_latency_us": 3167.5733333333333, 00:37:04.077 "max_latency_us": 26760.533333333333 00:37:04.077 } 00:37:04.077 ], 00:37:04.077 "core_count": 1 00:37:04.077 } 00:37:04.077 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:04.077 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:04.077 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:04.077 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:04.077 | select(.opcode=="crc32c") 00:37:04.077 | "\(.module_name) \(.executed)"' 00:37:04.077 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1811374 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1811374 ']' 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1811374 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1811374 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1811374' 00:37:04.337 killing process with pid 1811374 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1811374 00:37:04.337 Received shutdown signal, test time was about 2.000000 seconds 00:37:04.337 00:37:04.337 Latency(us) 00:37:04.337 [2024-12-09T04:30:18.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.337 [2024-12-09T04:30:18.334Z] =================================================================================================================== 00:37:04.337 [2024-12-09T04:30:18.334Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:04.337 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1811374 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1812067 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1812067 /var/tmp/bperf.sock 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1812067 ']' 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:04.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:04.906 05:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:04.906 [2024-12-09 05:30:18.841702] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:37:04.906 [2024-12-09 05:30:18.841809] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812067 ] 00:37:04.906 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:04.906 Zero copy mechanism will not be used. 00:37:05.165 [2024-12-09 05:30:18.973333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.165 [2024-12-09 05:30:19.049153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.734 05:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:05.734 05:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:05.734 05:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:05.734 05:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:05.734 05:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:05.994 05:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:05.994 05:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:06.564 nvme0n1 00:37:06.564 05:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:06.564 05:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:06.564 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:06.565 Zero copy mechanism will not be used. 00:37:06.565 Running I/O for 2 seconds... 
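Two things differ from the first run: the 128 KiB I/O size exceeds the 64 KiB zero-copy threshold (hence the repeated notice) and the queue depth drops to 16. Each pass is then graded the same way, by asking the accel layer which module actually executed the CRC32C operations; the jq filter below is copied from the get_accel_stats helper in the trace, and with DSA disabled (scan_dsa=false) the expected module is software with a non-zero count:

    rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # digest.sh reads this as: read -r acc_module acc_executed
    # and requires acc_executed > 0 and acc_module == software here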
00:37:08.892 3207.00 IOPS, 400.88 MiB/s [2024-12-09T04:30:22.889Z] 3182.50 IOPS, 397.81 MiB/s 00:37:08.892 Latency(us) 00:37:08.892 [2024-12-09T04:30:22.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.892 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:08.892 nvme0n1 : 2.00 3187.29 398.41 0.00 0.00 5016.77 1051.31 11414.19 00:37:08.892 [2024-12-09T04:30:22.889Z] =================================================================================================================== 00:37:08.892 [2024-12-09T04:30:22.889Z] Total : 3187.29 398.41 0.00 0.00 5016.77 1051.31 11414.19 00:37:08.892 { 00:37:08.892 "results": [ 00:37:08.892 { 00:37:08.892 "job": "nvme0n1", 00:37:08.892 "core_mask": "0x2", 00:37:08.892 "workload": "randread", 00:37:08.892 "status": "finished", 00:37:08.892 "queue_depth": 16, 00:37:08.892 "io_size": 131072, 00:37:08.892 "runtime": 2.002014, 00:37:08.892 "iops": 3187.290398568641, 00:37:08.892 "mibps": 398.41129982108015, 00:37:08.892 "io_failed": 0, 00:37:08.892 "io_timeout": 0, 00:37:08.892 "avg_latency_us": 5016.771406780547, 00:37:08.892 "min_latency_us": 1051.3066666666666, 00:37:08.892 "max_latency_us": 11414.186666666666 00:37:08.892 } 00:37:08.892 ], 00:37:08.892 "core_count": 1 00:37:08.892 } 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:08.892 | select(.opcode=="crc32c") 00:37:08.892 | "\(.module_name) \(.executed)"' 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1812067 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1812067 ']' 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1812067 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1812067 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1812067' 00:37:08.892 killing process with pid 1812067 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1812067 00:37:08.892 Received shutdown signal, test time was about 2.000000 seconds 00:37:08.892 00:37:08.892 Latency(us) 00:37:08.892 [2024-12-09T04:30:22.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.892 [2024-12-09T04:30:22.889Z] =================================================================================================================== 00:37:08.892 [2024-12-09T04:30:22.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:08.892 05:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1812067 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1813058 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1813058 /var/tmp/bperf.sock 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1813058 ']' 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:09.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:09.463 05:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:09.463 [2024-12-09 05:30:23.300767] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:37:09.463 [2024-12-09 05:30:23.300877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813058 ] 00:37:09.463 [2024-12-09 05:30:23.431980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.725 [2024-12-09 05:30:23.506256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.295 05:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.295 05:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:10.295 05:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:10.295 05:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:10.295 05:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:10.556 05:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:10.556 05:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:11.126 nvme0n1 00:37:11.126 05:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:11.126 05:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:11.126 Running I/O for 2 seconds... 
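The MiB/s column in the result block that follows is just IOPS times the 4 KiB block size; a sanity check against the numbers reported below:

    # 27536.39 IOPS * 4096 B = 112,789,053 B/s
    # 112,789,053 / 1048576  ≈ 107.56 MiB/s   (matches the reported mibps)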
00:37:13.007 27404.00 IOPS, 107.05 MiB/s [2024-12-09T04:30:27.004Z] 27533.00 IOPS, 107.55 MiB/s 00:37:13.007 Latency(us) 00:37:13.007 [2024-12-09T04:30:27.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.007 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:13.007 nvme0n1 : 2.01 27536.39 107.56 0.00 0.00 4642.24 2389.33 13762.56 00:37:13.007 [2024-12-09T04:30:27.004Z] =================================================================================================================== 00:37:13.007 [2024-12-09T04:30:27.004Z] Total : 27536.39 107.56 0.00 0.00 4642.24 2389.33 13762.56 00:37:13.007 { 00:37:13.007 "results": [ 00:37:13.007 { 00:37:13.007 "job": "nvme0n1", 00:37:13.007 "core_mask": "0x2", 00:37:13.007 "workload": "randwrite", 00:37:13.007 "status": "finished", 00:37:13.007 "queue_depth": 128, 00:37:13.007 "io_size": 4096, 00:37:13.007 "runtime": 2.006654, 00:37:13.007 "iops": 27536.386442306448, 00:37:13.007 "mibps": 107.56400954025956, 00:37:13.007 "io_failed": 0, 00:37:13.007 "io_timeout": 0, 00:37:13.007 "avg_latency_us": 4642.235629554559, 00:37:13.007 "min_latency_us": 2389.3333333333335, 00:37:13.007 "max_latency_us": 13762.56 00:37:13.007 } 00:37:13.007 ], 00:37:13.007 "core_count": 1 00:37:13.007 } 00:37:13.007 05:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:13.007 05:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:13.007 05:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:13.007 05:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:13.007 | select(.opcode=="crc32c") 00:37:13.007 | "\(.module_name) \(.executed)"' 00:37:13.007 05:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:13.267 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:13.267 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:13.267 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:13.267 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:13.267 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1813058 00:37:13.267 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1813058 ']' 00:37:13.268 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1813058 00:37:13.268 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:13.268 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:13.268 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813058 00:37:13.268 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:13.268 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:37:13.268 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813058' 00:37:13.268 killing process with pid 1813058 00:37:13.268 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1813058 00:37:13.268 Received shutdown signal, test time was about 2.000000 seconds 00:37:13.268 00:37:13.268 Latency(us) 00:37:13.268 [2024-12-09T04:30:27.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.268 [2024-12-09T04:30:27.265Z] =================================================================================================================== 00:37:13.268 [2024-12-09T04:30:27.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:13.268 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1813058 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1813766 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1813766 /var/tmp/bperf.sock 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1813766 ']' 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:13.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:13.838 05:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:13.838 [2024-12-09 05:30:27.744056] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:37:13.838 [2024-12-09 05:30:27.744163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813766 ] 00:37:13.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:13.838 Zero copy mechanism will not be used. 00:37:14.100 [2024-12-09 05:30:27.876727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.100 [2024-12-09 05:30:27.951370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.670 05:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:14.670 05:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:14.670 05:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:14.670 05:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:14.670 05:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:14.931 05:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:14.931 05:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:15.502 nvme0n1 00:37:15.502 05:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:15.502 05:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:15.502 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:15.502 Zero copy mechanism will not be used. 00:37:15.502 Running I/O for 2 seconds... 
00:37:17.482 8030.00 IOPS, 1003.75 MiB/s [2024-12-09T04:30:31.479Z] 7249.00 IOPS, 906.12 MiB/s 00:37:17.482 Latency(us) 00:37:17.482 [2024-12-09T04:30:31.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.482 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:17.482 nvme0n1 : 2.00 7245.58 905.70 0.00 0.00 2202.56 1126.40 9502.72 00:37:17.482 [2024-12-09T04:30:31.479Z] =================================================================================================================== 00:37:17.482 [2024-12-09T04:30:31.479Z] Total : 7245.58 905.70 0.00 0.00 2202.56 1126.40 9502.72 00:37:17.482 { 00:37:17.482 "results": [ 00:37:17.482 { 00:37:17.482 "job": "nvme0n1", 00:37:17.482 "core_mask": "0x2", 00:37:17.482 "workload": "randwrite", 00:37:17.482 "status": "finished", 00:37:17.482 "queue_depth": 16, 00:37:17.482 "io_size": 131072, 00:37:17.482 "runtime": 2.003153, 00:37:17.482 "iops": 7245.5773473119625, 00:37:17.482 "mibps": 905.6971684139953, 00:37:17.482 "io_failed": 0, 00:37:17.482 "io_timeout": 0, 00:37:17.482 "avg_latency_us": 2202.5597501263146, 00:37:17.482 "min_latency_us": 1126.4, 00:37:17.482 "max_latency_us": 9502.72 00:37:17.482 } 00:37:17.482 ], 00:37:17.482 "core_count": 1 00:37:17.482 } 00:37:17.482 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:17.482 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:17.482 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:17.482 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:17.482 | select(.opcode=="crc32c") 00:37:17.482 | "\(.module_name) \(.executed)"' 00:37:17.482 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1813766 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1813766 ']' 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1813766 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813766 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813766' 00:37:17.743 killing process with pid 1813766 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1813766 00:37:17.743 Received shutdown signal, test time was about 2.000000 seconds 00:37:17.743 00:37:17.743 Latency(us) 00:37:17.743 [2024-12-09T04:30:31.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.743 [2024-12-09T04:30:31.740Z] =================================================================================================================== 00:37:17.743 [2024-12-09T04:30:31.740Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:17.743 05:30:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1813766 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1811028 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1811028 ']' 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1811028 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1811028 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1811028' 00:37:18.316 killing process with pid 1811028 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1811028 00:37:18.316 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1811028 00:37:18.887 00:37:18.887 real 0m19.694s 00:37:18.887 user 0m37.462s 00:37:18.887 sys 0m4.336s 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:18.887 ************************************ 00:37:18.887 END TEST nvmf_digest_clean 00:37:18.887 ************************************ 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:18.887 ************************************ 00:37:18.887 START TEST nvmf_digest_error 00:37:18.887 ************************************ 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 
00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1814804 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1814804 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1814804 ']' 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:18.887 05:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:19.149 [2024-12-09 05:30:32.938359] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:19.149 [2024-12-09 05:30:32.938490] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:19.149 [2024-12-09 05:30:33.090981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.410 [2024-12-09 05:30:33.172799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:19.410 [2024-12-09 05:30:33.172842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:19.410 [2024-12-09 05:30:33.172850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:19.410 [2024-12-09 05:30:33.172859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:19.410 [2024-12-09 05:30:33.172871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:19.410 [2024-12-09 05:30:33.173789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.981 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:19.981 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:19.981 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:19.981 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:19.981 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:19.982 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:19.982 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:19.982 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.982 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:19.982 [2024-12-09 05:30:33.743603] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:19.982 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.982 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:19.982 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:19.982 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.982 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:19.982 null0 00:37:19.982 [2024-12-09 05:30:33.953520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:20.243 [2024-12-09 05:30:33.977745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1815108 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1815108 /var/tmp/bperf.sock 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1815108 ']' 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:20.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:20.243 05:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:20.243 [2024-12-09 05:30:34.061261] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:20.243 [2024-12-09 05:30:34.061368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815108 ] 00:37:20.243 [2024-12-09 05:30:34.194194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.503 [2024-12-09 05:30:34.268854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:21.076 05:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:21.646 nvme0n1 00:37:21.646 05:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:21.646 05:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.646 05:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:37:21.646 05:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.646 05:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:21.646 05:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:21.646 Running I/O for 2 seconds... 00:37:21.646 [2024-12-09 05:30:35.503114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.503156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.503169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.516096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.516124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.516135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.526989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.527017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.527027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.535966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.535990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.535999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.546173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.546196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.546205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.556427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.556450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.556459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.568248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.568272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.568281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.577185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.577207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.577217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.588863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.588887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.588896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.601053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.601076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.646 [2024-12-09 05:30:35.601085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.646 [2024-12-09 05:30:35.610603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.646 [2024-12-09 05:30:35.610624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.647 [2024-12-09 05:30:35.610633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.647 [2024-12-09 05:30:35.621448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.647 [2024-12-09 05:30:35.621471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.647 [2024-12-09 05:30:35.621480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.647 [2024-12-09 05:30:35.631091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.647 [2024-12-09 05:30:35.631114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.647 [2024-12-09 
05:30:35.631123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.640317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.640340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.907 [2024-12-09 05:30:35.640349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.650354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.650377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.907 [2024-12-09 05:30:35.650386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.660843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.660866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.907 [2024-12-09 05:30:35.660875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.670256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.670279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.907 [2024-12-09 05:30:35.670288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.680013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.680035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.907 [2024-12-09 05:30:35.680044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.690007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.690030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.907 [2024-12-09 05:30:35.690040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.699784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.699806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.907 [2024-12-09 05:30:35.699824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.710318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.710340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.907 [2024-12-09 05:30:35.710349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.718987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.719009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.907 [2024-12-09 05:30:35.719018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.907 [2024-12-09 05:30:35.729509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.907 [2024-12-09 05:30:35.729531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.729540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.738855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.738877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.738886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.748434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.748456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.748465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.758605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.758628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.758637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.770055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.770078] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.770086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.782425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.782448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.782457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.795425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.795448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.795457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.803353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.803375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.803384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.815433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.815455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.815464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.826005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.826028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.826037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.835526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.835548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.835557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.844838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.844859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.844868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.855194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.855216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.855225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.865248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.865270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.865279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.874253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.874275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.874288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.884801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.884829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.884838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.908 [2024-12-09 05:30:35.895052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:21.908 [2024-12-09 05:30:35.895074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.908 [2024-12-09 05:30:35.895083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.169 [2024-12-09 05:30:35.905488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:22.169 [2024-12-09 05:30:35.905510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.169 [2024-12-09 05:30:35.905519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.169 [2024-12-09 
05:30:35.914476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:22.169 [2024-12-09 05:30:35.914497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.169 [2024-12-09 05:30:35.914506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.169 [2024-12-09 05:30:35.926495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:22.169 [2024-12-09 05:30:35.926517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.169 [2024-12-09 05:30:35.926526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.169 [2024-12-09 05:30:35.936497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:22.169 [2024-12-09 05:30:35.936518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.169 [2024-12-09 05:30:35.936527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.169 [2024-12-09 05:30:35.945653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:22.169 [2024-12-09 05:30:35.945675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.169 [2024-12-09 05:30:35.945684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.169 [2024-12-09 05:30:35.956319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:22.169 [2024-12-09 05:30:35.956342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.169 [2024-12-09 05:30:35.956351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.169 [2024-12-09 05:30:35.966094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:22.169 [2024-12-09 05:30:35.966116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.169 [2024-12-09 05:30:35.966124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.169 [2024-12-09 05:30:35.975437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:22.169 [2024-12-09 05:30:35.975459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.169 [2024-12-09 05:30:35.975468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:22.169 [2024-12-09 05:30:35.986604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200)
00:37:22.169 [2024-12-09 05:30:35.986626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:22.169 [2024-12-09 05:30:35.986635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triad repeats for ~50 further commands on qid:1 (varying cid and lba, tqpair 0x615000394200), host timestamps 2024-12-09 05:30:35.997 through 05:30:36.480 ...]
00:37:22.694 24873.00 IOPS, 97.16 MiB/s [2024-12-09T04:30:36.691Z]
[... the triad repeats for ~90 further commands on qid:1 (varying cid and lba, tqpair 0x615000394200), host timestamps 2024-12-09 05:30:36.490 through 05:30:37.416 ...]
00:37:23.482 [2024-12-09
05:30:37.427050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:23.482 [2024-12-09 05:30:37.427072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.482 [2024-12-09 05:30:37.427081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:23.482 [2024-12-09 05:30:37.435632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:23.482 [2024-12-09 05:30:37.435654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.482 [2024-12-09 05:30:37.435663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:23.482 [2024-12-09 05:30:37.445137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:23.482 [2024-12-09 05:30:37.445160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.482 [2024-12-09 05:30:37.445169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:23.482 [2024-12-09 05:30:37.455840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:23.482 [2024-12-09 05:30:37.455862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.482 [2024-12-09 05:30:37.455871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:23.482 [2024-12-09 05:30:37.464927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:23.482 [2024-12-09 05:30:37.464949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.482 [2024-12-09 05:30:37.464958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:23.743 [2024-12-09 05:30:37.475619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:23.743 [2024-12-09 05:30:37.475641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.743 [2024-12-09 05:30:37.475650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:23.743 [2024-12-09 05:30:37.486125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:23.743 [2024-12-09 05:30:37.486148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.743 [2024-12-09 05:30:37.486158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:23.743 25171.50 IOPS, 98.33 MiB/s 00:37:23.743 Latency(us) 00:37:23.743 [2024-12-09T04:30:37.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.743 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:23.743 nvme0n1 : 2.00 25176.21 98.34 0.00 0.00 5079.43 2539.52 15947.09 00:37:23.743 [2024-12-09T04:30:37.740Z] =================================================================================================================== 00:37:23.743 [2024-12-09T04:30:37.740Z] Total : 25176.21 98.34 0.00 0.00 5079.43 2539.52 15947.09 00:37:23.743 { 00:37:23.743 "results": [ 00:37:23.743 { 00:37:23.743 "job": "nvme0n1", 00:37:23.743 "core_mask": "0x2", 00:37:23.743 "workload": "randread", 00:37:23.743 "status": "finished", 00:37:23.743 "queue_depth": 128, 00:37:23.743 "io_size": 4096, 00:37:23.743 "runtime": 2.00471, 00:37:23.743 "iops": 25176.210025390206, 00:37:23.743 "mibps": 98.3445704116805, 00:37:23.743 "io_failed": 0, 00:37:23.743 "io_timeout": 0, 00:37:23.743 "avg_latency_us": 5079.428058621123, 00:37:23.743 "min_latency_us": 2539.52, 00:37:23.743 "max_latency_us": 15947.093333333334 00:37:23.743 } 00:37:23.743 ], 00:37:23.743 "core_count": 1 00:37:23.743 } 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:23.743 | .driver_specific 00:37:23.743 | .nvme_error 00:37:23.743 | .status_code 00:37:23.743 | .command_transient_transport_error' 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 )) 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1815108 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1815108 ']' 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1815108 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:23.743 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815108 00:37:24.005 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:24.005 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:24.005 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815108' 00:37:24.005 killing process with pid 1815108 00:37:24.005 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1815108 00:37:24.005 Received shutdown signal, test time was about 2.000000 
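The check above is the heart of the digest-error test: host/digest.sh reads the per-bdev NVMe error counters over bdevperf's private RPC socket and asserts that the transient-transport-error count (197 in this run) is non-zero. A minimal standalone sketch of the same query, assuming bdevperf is still listening on /var/tmp/bperf.sock and the bdev is named nvme0n1 as in the trace:

    # Pull the transient transport error counter the same way
    # digest.sh's get_transient_errcount helper does.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'

The counter is only populated because the controller was configured with bdev_nvme_set_options --nvme-error-stat, and with --bdev-retry-count -1 every digest failure is retried instead of being failed upward, which is why io_failed stays 0 in the JSON results above (the same options are traced for the next sub-test below).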
00:37:24.005 
00:37:24.005 Latency(us)
00:37:24.005 [2024-12-09T04:30:38.002Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:24.005 [2024-12-09T04:30:38.002Z] ===================================================================================================================
00:37:24.005 [2024-12-09T04:30:38.002Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:37:24.005 05:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1815108
00:37:24.266 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:37:24.266 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:24.266 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:37:24.266 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:24.266 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:24.266 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1815840
00:37:24.266 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1815840 /var/tmp/bperf.sock
00:37:24.267 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1815840 ']'
00:37:24.267 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:37:24.267 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:24.267 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:24.267 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:24.267 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:24.267 05:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:24.527 [2024-12-09 05:30:38.295269] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:37:24.527 [2024-12-09 05:30:38.295378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815840 ]
00:37:24.527 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:24.527 Zero copy mechanism will not be used.
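The trace above shows the harness pattern digest.sh uses for each sub-test: bdevperf is started with -z so it idles until told to run, -r points it at a private RPC socket, and the workload (randread, 128 KiB I/O, queue depth 16, 2 seconds) is fixed on the command line. A hedged sketch of the same sequence, using the paths exactly as traced:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start bdevperf idle (-z) on core mask 0x2 with a dedicated RPC socket.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # ... wait for /var/tmp/bperf.sock to appear, then create bdevs via rpc.py ...
    # Kick off the preconfigured workload and collect the JSON results:
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Keeping bdevperf idle until perform_tests is what lets the script inject the digest corruption between attaching the controller and starting I/O.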
00:37:24.527 [2024-12-09 05:30:38.426341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:24.527 [2024-12-09 05:30:38.501239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:25.098 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:25.098 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:37:25.098 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:25.098 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:25.359 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:25.359 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:25.359 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:25.359 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:25.359 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:25.359 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:25.620 nvme0n1
00:37:25.621 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:25.621 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:25.621 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:25.621 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:25.621 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:25.621 05:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:25.621 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:25.621 Zero copy mechanism will not be used.
00:37:25.621 Running I/O for 2 seconds...
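The trace above is the fault-injection setup itself: NVMe error statistics and unlimited retries are enabled, the controller is attached over TCP with data digest turned on (--ddgst), and then the accel error module is told to corrupt CRC32C results, so every subsequent data-digest check on received data fails. A condensed sketch of that RPC sequence, with the same socket, target address, and subsystem NQN as traced (bperf_rpc and rpc_cmd are just the harness wrappers around rpc.py):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable        # attach cleanly first
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # data digest enabled
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32  # now corrupt crc32c ops

Each corrupted digest then surfaces as one of the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follow, and because the retry count is infinite the I/O never fails outright; it only increments the transient error counter that the test asserts on afterwards.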
00:37:25.621 [2024-12-09 05:30:39.597677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200)
00:37:25.621 [2024-12-09 05:30:39.597723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:25.621 [2024-12-09 05:30:39.597737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:25.621 [2024-12-09 05:30:39.607053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200)
00:37:25.621 [2024-12-09 05:30:39.607084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:25.621 [2024-12-09 05:30:39.607095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line sequence repeats for every 128 KiB (len:32) read of this run on tqpair 0x615000394200, timestamps 05:30:39.611757 through 05:30:40.260452 ...]
00:37:26.405 [2024-12-09 05:30:40.265584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200)
00:37:26.405 [2024-12-09 05:30:40.265606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:26.405 [2024-12-09 05:30:40.265615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:26.405 [2024-12-09 05:30:40.270230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200)
00:37:26.406 [2024-12-09 05:30:40.270252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:26.406 [2024-12-09 05:30:40.270261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.274984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.275007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.275015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.281005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.281028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.281037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.291476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.291499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.291508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.296878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.296901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.296910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.307863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.307885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.307894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.316998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.317021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.317031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.326050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.326073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.326082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.335951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.335982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.335992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.344377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.344400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.344410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.354614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.354638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.354646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.361874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.361897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.361905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.367758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.367780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.367789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.374058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.374080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.374089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.381499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.381521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.381533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.390800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.390827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.390836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.406 [2024-12-09 05:30:40.398061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.406 [2024-12-09 05:30:40.398085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.406 [2024-12-09 05:30:40.398093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.403752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.403774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.403783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.410835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.410858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.410867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.419090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.419113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.419122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.428305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.428328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.428337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.432829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.432850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.432859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.440087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.440109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.440118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.445141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.445167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.445176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.452067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.452090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.452099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.463054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.463078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.463087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.471622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.471645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.471654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.481295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.481318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.481327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.488296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.488319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.488329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.499003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.499027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.499036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.508296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.508319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.508329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.519197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.667 [2024-12-09 05:30:40.519221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.667 [2024-12-09 05:30:40.519233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.667 [2024-12-09 05:30:40.530006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.530030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.530040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.539464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.539495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.539504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.548620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.548643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.548651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.559437] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.559461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.559470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.569862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.569885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.569894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.580671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.580694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.580703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.590993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.591016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.591025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.668 3917.00 IOPS, 489.62 MiB/s [2024-12-09T04:30:40.665Z] [2024-12-09 05:30:40.602393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.602417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.602426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.613994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.614017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.614027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.624202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.624226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.624234] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.636107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.636131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.636139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.646999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.647023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.647032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.668 [2024-12-09 05:30:40.658752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.668 [2024-12-09 05:30:40.658776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.668 [2024-12-09 05:30:40.658785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.667571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.667595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.667604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.676169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.676192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.676202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.685779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.685802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.685811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.697566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.697590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:26.929 [2024-12-09 05:30:40.697602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.709395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.709418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.709427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.720054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.720077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.720086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.730889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.730913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.730922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.741461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.741485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.741494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.753337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.753361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.753370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.763528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.763552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.763561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.774040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.774064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.774073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.929 [2024-12-09 05:30:40.785033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.929 [2024-12-09 05:30:40.785056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.929 [2024-12-09 05:30:40.785065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.795411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.795438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.795447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.805488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.805510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.805519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.814557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.814581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.814590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.823697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.823720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.823729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.832363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.832387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.832396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.843129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 
00:37:26.930 [2024-12-09 05:30:40.843153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.843162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.853688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.853712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.853721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.864038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.864061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.864070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.875103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.875127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.875139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.886263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.886286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.886295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.897016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.897039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.897048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.906871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.906894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.906904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:26.930 [2024-12-09 05:30:40.918231] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:26.930 [2024-12-09 05:30:40.918253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-12-09 05:30:40.918262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:40.929455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:40.929479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:40.929488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:40.940401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:40.940424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:40.940434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:40.951663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:40.951687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:40.951696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:40.962155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:40.962179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:40.962188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:40.970865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:40.970892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:40.970901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:40.982213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:40.982237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:40.982246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:40.993296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:40.993319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:40.993328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:41.003605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:41.003628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:41.003637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:41.015052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:41.015075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:41.015084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:41.023720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:41.023744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:41.023753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:41.034443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:41.034465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:41.034474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:41.043201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:41.043225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:41.043233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:41.053413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:41.053435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:41.053448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:41.061415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:41.061437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:41.061446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:41.072736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.191 [2024-12-09 05:30:41.072759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.191 [2024-12-09 05:30:41.072769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.191 [2024-12-09 05:30:41.083456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.083479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.083488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.093051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.093076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.093085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.103306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.103328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.103337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.109366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.109388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.109397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.113732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.113755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.113764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.118126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.118149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.118157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.122645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.122672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.122681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.127315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.127338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.127347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.131804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.131832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.131841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.136540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.136563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.136571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.141133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.141156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.141165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.152208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.152231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.152240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.160542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.160565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.160574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.171135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.171158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.171167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.192 [2024-12-09 05:30:41.181323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.192 [2024-12-09 05:30:41.181347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-12-09 05:30:41.181355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.190912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.190936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.190945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.199545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.199568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.199577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.210800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.210829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.210838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.219245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.219270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.219279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.230503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.230528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.230537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.238665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.238689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.238698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.249687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.249710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.249719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.260284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.260307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.260317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.271054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.271080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.271090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.280650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.280673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.280681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.289883] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.289905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.289914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.299548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.299570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.299579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.310311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.310334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.310344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.320702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.320725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.320734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.331448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.331471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.331480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.338028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.338051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.338060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.349241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.349264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.349272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.361357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.361380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.361389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.373444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.373467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.373496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.386175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.386197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.386206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.398859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.398882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.398892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.408563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.408587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.408596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.420280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.420303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.420312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.430780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.430804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.430812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.452 [2024-12-09 05:30:41.441868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.452 [2024-12-09 05:30:41.441891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.452 [2024-12-09 05:30:41.441901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.453262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.453289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.453298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.464214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.464237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.464246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.475622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.475646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.475655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.485559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.485583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.485592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.496064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.496088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.496097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.506529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.506552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.506561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.515689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.515712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.515722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.525163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.525186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.525195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.536706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.536729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.536738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.547532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.547555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.547564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.558799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.558827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.558836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.569488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.569511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.569520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:27.713 [2024-12-09 05:30:41.579624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.713 [2024-12-09 05:30:41.579646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.713 [2024-12-09 05:30:41.579655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.714 [2024-12-09 05:30:41.585389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.714 [2024-12-09 05:30:41.585411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.714 [2024-12-09 05:30:41.585421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:27.714 [2024-12-09 05:30:41.592743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000394200) 00:37:27.714 [2024-12-09 05:30:41.592765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.714 [2024-12-09 05:30:41.592774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:27.714 3519.00 IOPS, 439.88 MiB/s 00:37:27.714 Latency(us) 00:37:27.714 [2024-12-09T04:30:41.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.714 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:27.714 nvme0n1 : 2.00 3518.59 439.82 0.00 0.00 4544.18 679.25 13107.20 00:37:27.714 [2024-12-09T04:30:41.711Z] =================================================================================================================== 00:37:27.714 [2024-12-09T04:30:41.711Z] Total : 3518.59 439.82 0.00 0.00 4544.18 679.25 13107.20 00:37:27.714 { 00:37:27.714 "results": [ 00:37:27.714 { 00:37:27.714 "job": "nvme0n1", 00:37:27.714 "core_mask": "0x2", 00:37:27.714 "workload": "randread", 00:37:27.714 "status": "finished", 00:37:27.714 "queue_depth": 16, 00:37:27.714 "io_size": 131072, 00:37:27.714 "runtime": 2.004779, 00:37:27.714 "iops": 3518.592323642656, 00:37:27.714 "mibps": 439.824040455332, 00:37:27.714 "io_failed": 0, 00:37:27.714 "io_timeout": 0, 00:37:27.714 "avg_latency_us": 4544.184360646442, 00:37:27.714 "min_latency_us": 679.2533333333333, 00:37:27.714 "max_latency_us": 13107.2 00:37:27.714 } 00:37:27.714 ], 00:37:27.714 "core_count": 1 00:37:27.714 } 00:37:27.714 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:27.714 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:27.714 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:27.714 | .driver_specific 00:37:27.714 | .nvme_error 00:37:27.714 | .status_code 00:37:27.714 | .command_transient_transport_error' 00:37:27.714 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 )) 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1815840 00:37:27.975 
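The 228 in the check above is the number of COMMAND TRANSIENT TRANSPORT ERROR completions that the injected digest errors produced; the harness reads it back out of bdevperf over RPC. A minimal sketch of that helper, reconstructed from the xtrace lines (socket path and jq filter exactly as printed in the trace):

    # Ask bdevperf for per-bdev I/O stats and extract the NVMe error counter
    # for status code COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    get_transient_errcount() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    (( $(get_transient_errcount nvme0n1) > 0 ))   # the test passes only if errors were recorded

With the errors accounted for, the first bperf instance is torn down before the next I/O pattern runs.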
05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1815840 ']' 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1815840 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815840 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815840' 00:37:27.975 killing process with pid 1815840 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1815840 00:37:27.975 Received shutdown signal, test time was about 2.000000 seconds 00:37:27.975 00:37:27.975 Latency(us) 00:37:27.975 [2024-12-09T04:30:41.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.975 [2024-12-09T04:30:41.972Z] =================================================================================================================== 00:37:27.975 [2024-12-09T04:30:41.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:27.975 05:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1815840 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1816537 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1816537 /var/tmp/bperf.sock 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1816537 ']' 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:28.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
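The relaunch above mirrors the first run with only the workload changed: -m 2 pins bdevperf to core 1 (mask 0x2), -r names the RPC socket, -w randwrite with -o 4096 and -q 128 requests 4 KiB random writes at queue depth 128, -t 2 bounds the run to two seconds, and -z holds the job idle until perform_tests arrives over RPC. A stripped-down restatement of that launch (paths and arguments verbatim from the trace; the backgrounding and pid capture reflect how run_bperf_err tracks the process):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/build/examples/bdevperf" \
        -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -q 128 -t 2 -z &
    bperfpid=$!                                  # 1816537 in this run
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # autotest helper: block until the socket is up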
00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:28.545 05:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:28.545 [2024-12-09 05:30:42.406886] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:28.545 [2024-12-09 05:30:42.406995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816537 ] 00:37:28.806 [2024-12-09 05:30:42.542357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.806 [2024-12-09 05:30:42.617559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.376 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:29.376 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:29.376 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:29.376 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:29.376 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:29.376 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.376 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:29.635 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.635 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:29.635 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:29.635 nvme0n1 00:37:29.635 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:29.635 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.635 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:29.895 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.895 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:29.895 05:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:29.895 Running I/O for 2 seconds... 
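Between launch and I/O, the trace above re-arms the fault path: NVMe error-stat accounting is switched on, crc32c corruption is disabled so the controller can attach cleanly over a data-digest-enabled TCP connection, and only then is corruption re-injected before perform_tests starts the writes. A sketch of that sequence, with every argument taken verbatim from the echoed commands (the bperf calls address /var/tmp/bperf.sock; rpc_cmd is the autotest RPC helper, whose target socket is not shown in this trace):

    bperf="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Tally NVMe errors per status code and retry indefinitely instead of failing I/O.
    $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c honest while attaching with data digest (--ddgst) enabled...
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then corrupt crc32c results (-t corrupt -i 256, as given) and start the workload.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces below as a 'Data digest error' on the qpair paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.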
00:37:29.895 [2024-12-09 05:30:43.737593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:37:29.895 [2024-12-09 05:30:43.738683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.738720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.747165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:29.895 [2024-12-09 05:30:43.748298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.748323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.756532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:29.895 [2024-12-09 05:30:43.757648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.757672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.765900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:37:29.895 [2024-12-09 05:30:43.767007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.767029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.775255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:29.895 [2024-12-09 05:30:43.776378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.776399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.784632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:29.895 [2024-12-09 05:30:43.785741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.785762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.793991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:29.895 [2024-12-09 05:30:43.795125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.795146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.803328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:37:29.895 [2024-12-09 05:30:43.804423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.804443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.812660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173efae0 00:37:29.895 [2024-12-09 05:30:43.813784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.813805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.821982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:29.895 [2024-12-09 05:30:43.823096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.823117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.831300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:37:29.895 [2024-12-09 05:30:43.832435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.832455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.840640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2d80 00:37:29.895 [2024-12-09 05:30:43.841758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.841779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.849986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:29.895 [2024-12-09 05:30:43.851107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.851130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.859335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:37:29.895 [2024-12-09 05:30:43.860457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.860477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.868710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:37:29.895 [2024-12-09 05:30:43.869810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.869834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.878040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:37:29.895 [2024-12-09 05:30:43.879156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.895 [2024-12-09 05:30:43.879176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.895 [2024-12-09 05:30:43.887368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:30.156 [2024-12-09 05:30:43.888422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.888444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.896694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:30.156 [2024-12-09 05:30:43.897766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.897787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.906028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:37:30.156 [2024-12-09 05:30:43.907126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.907147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.915358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:30.156 [2024-12-09 05:30:43.916325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.916346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.925569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:37:30.156 [2024-12-09 05:30:43.926878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:30.156 [2024-12-09 05:30:43.926899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.935065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:37:30.156 [2024-12-09 05:30:43.936408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.936429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.942824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:37:30.156 [2024-12-09 05:30:43.943385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.943406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.953518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:37:30.156 [2024-12-09 05:30:43.954867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.954888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.962253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:37:30.156 [2024-12-09 05:30:43.963233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.963254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.971487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:37:30.156 [2024-12-09 05:30:43.972461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.972482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.980846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:37:30.156 [2024-12-09 05:30:43.981821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.156 [2024-12-09 05:30:43.981842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.156 [2024-12-09 05:30:43.990184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:30.157 [2024-12-09 05:30:43.991122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:16062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:43.991146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:43.999529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:30.157 [2024-12-09 05:30:44.000511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.000532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.008892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:30.157 [2024-12-09 05:30:44.009860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.009881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.018233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:30.157 [2024-12-09 05:30:44.019206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.019227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.027580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:30.157 [2024-12-09 05:30:44.028558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.028580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.036938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7818 00:37:30.157 [2024-12-09 05:30:44.037888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.037908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.046301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:30.157 [2024-12-09 05:30:44.047220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.047241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.055658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e99d8 00:37:30.157 [2024-12-09 05:30:44.056623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.056645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.065013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaab8 00:37:30.157 [2024-12-09 05:30:44.065983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.066003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.074359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:37:30.157 [2024-12-09 05:30:44.075313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.075335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.083681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:30.157 [2024-12-09 05:30:44.084650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.084671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.093052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:37:30.157 [2024-12-09 05:30:44.093997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.094018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.102396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:30.157 [2024-12-09 05:30:44.103375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.103396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.111758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:37:30.157 [2024-12-09 05:30:44.112739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.112760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.121106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 
00:37:30.157 [2024-12-09 05:30:44.122088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.122109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.130432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0a68 00:37:30.157 [2024-12-09 05:30:44.131405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.131426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.140959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df118 00:37:30.157 [2024-12-09 05:30:44.142401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.157 [2024-12-09 05:30:44.142422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:30.157 [2024-12-09 05:30:44.149266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea248 00:37:30.419 [2024-12-09 05:30:44.149973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.149998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.158513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:30.419 [2024-12-09 05:30:44.159258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.159278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.167468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:30.419 [2024-12-09 05:30:44.168090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.168110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.178244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eee38 00:37:30.419 [2024-12-09 05:30:44.179462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.179483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.186858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:30.419 [2024-12-09 05:30:44.187923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.187943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.197557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:37:30.419 [2024-12-09 05:30:44.198912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.198932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.205453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:30.419 [2024-12-09 05:30:44.206320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.206341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.214833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:30.419 [2024-12-09 05:30:44.215689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.215710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.224376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:30.419 [2024-12-09 05:30:44.225255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.225276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.233772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:37:30.419 [2024-12-09 05:30:44.234666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.234687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.243157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:30.419 [2024-12-09 05:30:44.244025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.244045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 
05:30:44.252529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:30.419 [2024-12-09 05:30:44.253412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.253433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.261879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:30.419 [2024-12-09 05:30:44.262752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.262773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.271258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:30.419 [2024-12-09 05:30:44.272080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.272101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.280624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:30.419 [2024-12-09 05:30:44.281478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.281499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.290019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:30.419 [2024-12-09 05:30:44.290875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.290896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.299385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:30.419 [2024-12-09 05:30:44.300263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.300283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.308771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:30.419 [2024-12-09 05:30:44.309647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.309667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.318168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:30.419 [2024-12-09 05:30:44.319048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.319069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.327562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:30.419 [2024-12-09 05:30:44.328424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.328445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.336892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:30.419 [2024-12-09 05:30:44.337775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.337796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.346276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:37:30.419 [2024-12-09 05:30:44.347097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.347119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.355664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:30.419 [2024-12-09 05:30:44.356525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.356546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.365050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:30.419 [2024-12-09 05:30:44.365886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.365907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.374380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:30.419 [2024-12-09 05:30:44.375259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.375279] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.383712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:30.419 [2024-12-09 05:30:44.384589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.419 [2024-12-09 05:30:44.384610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.419 [2024-12-09 05:30:44.393094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:30.420 [2024-12-09 05:30:44.393954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.420 [2024-12-09 05:30:44.393977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.420 [2024-12-09 05:30:44.402499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:30.420 [2024-12-09 05:30:44.403358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.420 [2024-12-09 05:30:44.403378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.420 [2024-12-09 05:30:44.411887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:30.681 [2024-12-09 05:30:44.412726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.412747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.421283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:30.681 [2024-12-09 05:30:44.422141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.422162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.430629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:30.681 [2024-12-09 05:30:44.431506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.431527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.440024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:30.681 [2024-12-09 05:30:44.440902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 
05:30:44.440923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.449409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:30.681 [2024-12-09 05:30:44.450287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.450307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.458807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:37:30.681 [2024-12-09 05:30:44.459687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.459708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.468242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:30.681 [2024-12-09 05:30:44.469092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.469113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.477617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:30.681 [2024-12-09 05:30:44.478484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.478505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.487007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:30.681 [2024-12-09 05:30:44.487880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.487901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.496405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:30.681 [2024-12-09 05:30:44.497283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.497305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.505759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:30.681 [2024-12-09 05:30:44.506575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20593 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.506596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.515107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:30.681 [2024-12-09 05:30:44.515980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.516002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.524468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:30.681 [2024-12-09 05:30:44.525326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.525347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.533832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:30.681 [2024-12-09 05:30:44.534697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.534717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.543196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:30.681 [2024-12-09 05:30:44.544070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.544090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.552562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:30.681 [2024-12-09 05:30:44.553420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.553444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.561986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:30.681 [2024-12-09 05:30:44.562835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.562856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.571379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:37:30.681 [2024-12-09 05:30:44.572234] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.572255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.580784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:30.681 [2024-12-09 05:30:44.581646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.581667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.590139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:30.681 [2024-12-09 05:30:44.591025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.591046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.599523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:30.681 [2024-12-09 05:30:44.600400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.600421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.608929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:30.681 [2024-12-09 05:30:44.609801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.609826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.618326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:30.681 [2024-12-09 05:30:44.619206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.619227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.627699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:30.681 [2024-12-09 05:30:44.628564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.628585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.637053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:30.681 [2024-12-09 
05:30:44.637907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.637928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.646383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:30.681 [2024-12-09 05:30:44.647259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.647279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.655788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:30.681 [2024-12-09 05:30:44.656650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.656671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.681 [2024-12-09 05:30:44.665156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:30.681 [2024-12-09 05:30:44.666025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.681 [2024-12-09 05:30:44.666046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.674522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:30.943 [2024-12-09 05:30:44.675397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.675418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.683869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:37:30.943 [2024-12-09 05:30:44.684738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.684760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.693190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:30.943 [2024-12-09 05:30:44.694022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.694043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.702560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:30.943 [2024-12-09 05:30:44.703435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.703455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.711900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:30.943 [2024-12-09 05:30:44.712758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.712779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.721246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:30.943 [2024-12-09 05:30:44.722088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.722109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:30.943 27092.00 IOPS, 105.83 MiB/s [2024-12-09T04:30:44.940Z] [2024-12-09 05:30:44.730578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:30.943 [2024-12-09 05:30:44.731458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.731478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.739954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:30.943 [2024-12-09 05:30:44.740828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.740849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.749313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:30.943 [2024-12-09 05:30:44.750189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.750210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.758671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:30.943 [2024-12-09 05:30:44.759535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.759556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003f p:0 
m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.768042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:30.943 [2024-12-09 05:30:44.768894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.768914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.777379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:30.943 [2024-12-09 05:30:44.778252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.778273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.786733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:30.943 [2024-12-09 05:30:44.787609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.787630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.796110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:30.943 [2024-12-09 05:30:44.796979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.797001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.805473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:30.943 [2024-12-09 05:30:44.806345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.806366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.814885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:30.943 [2024-12-09 05:30:44.815752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.815772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.824265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:30.943 [2024-12-09 05:30:44.825106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.825127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.833610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:30.943 [2024-12-09 05:30:44.834491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.834513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.843004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:30.943 [2024-12-09 05:30:44.843878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.843899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.852388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:30.943 [2024-12-09 05:30:44.853266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.853286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.861760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:30.943 [2024-12-09 05:30:44.862633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.862654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.871172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:30.943 [2024-12-09 05:30:44.872027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.872047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.880530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:30.943 [2024-12-09 05:30:44.881384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.881405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.889889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:30.943 [2024-12-09 05:30:44.890757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.890778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.899244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:30.943 [2024-12-09 05:30:44.900100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.943 [2024-12-09 05:30:44.900121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.943 [2024-12-09 05:30:44.908609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:30.944 [2024-12-09 05:30:44.909487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.944 [2024-12-09 05:30:44.909507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.944 [2024-12-09 05:30:44.918014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:30.944 [2024-12-09 05:30:44.918881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.944 [2024-12-09 05:30:44.918902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:30.944 [2024-12-09 05:30:44.927468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:30.944 [2024-12-09 05:30:44.928336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.944 [2024-12-09 05:30:44.928357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:44.936844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:31.205 [2024-12-09 05:30:44.937675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:44.937695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:44.946274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:31.205 [2024-12-09 05:30:44.947143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:44.947164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:44.955647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:31.205 [2024-12-09 05:30:44.956516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:31.205 [2024-12-09 05:30:44.956540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:44.964993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:31.205 [2024-12-09 05:30:44.965818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:44.965838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:44.974347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:31.205 [2024-12-09 05:30:44.975201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:44.975222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:44.983696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:31.205 [2024-12-09 05:30:44.984555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:44.984576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:44.993039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:31.205 [2024-12-09 05:30:44.993882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:44.993902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.002372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:31.205 [2024-12-09 05:30:45.003236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.003257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.011771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:31.205 [2024-12-09 05:30:45.012639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.012660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.021131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:31.205 [2024-12-09 05:30:45.021984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 
nsid:1 lba:24551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.022005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.030481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:31.205 [2024-12-09 05:30:45.031344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.031365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.039820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:31.205 [2024-12-09 05:30:45.040692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.040713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.049185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:31.205 [2024-12-09 05:30:45.050046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.050067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.058533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:31.205 [2024-12-09 05:30:45.059411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.059432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.067916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:31.205 [2024-12-09 05:30:45.068785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.068805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.077289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:31.205 [2024-12-09 05:30:45.078122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.078143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.086652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:31.205 [2024-12-09 05:30:45.087524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.087545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.096028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:31.205 [2024-12-09 05:30:45.096895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.096916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.105362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:31.205 [2024-12-09 05:30:45.106246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.106267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.114727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:31.205 [2024-12-09 05:30:45.115604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.115624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.124138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:31.205 [2024-12-09 05:30:45.125013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.125033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.133529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:31.205 [2024-12-09 05:30:45.134361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.134382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.142882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:31.205 [2024-12-09 05:30:45.143753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.205 [2024-12-09 05:30:45.143774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.205 [2024-12-09 05:30:45.152228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 
00:37:31.206 [2024-12-09 05:30:45.153082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.206 [2024-12-09 05:30:45.153103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.206 [2024-12-09 05:30:45.161574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:31.206 [2024-12-09 05:30:45.162449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.206 [2024-12-09 05:30:45.162469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.206 [2024-12-09 05:30:45.170989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:31.206 [2024-12-09 05:30:45.171854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.206 [2024-12-09 05:30:45.171875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.206 [2024-12-09 05:30:45.180355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:31.206 [2024-12-09 05:30:45.181213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.206 [2024-12-09 05:30:45.181234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.206 [2024-12-09 05:30:45.189699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:31.206 [2024-12-09 05:30:45.190574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.206 [2024-12-09 05:30:45.190595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.467 [2024-12-09 05:30:45.199041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:31.467 [2024-12-09 05:30:45.199906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.467 [2024-12-09 05:30:45.199928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.467 [2024-12-09 05:30:45.208398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:31.467 [2024-12-09 05:30:45.209248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.467 [2024-12-09 05:30:45.209269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.467 [2024-12-09 05:30:45.217745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:31.467 [2024-12-09 05:30:45.218776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.467 [2024-12-09 05:30:45.218797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.467 [2024-12-09 05:30:45.227303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:31.467 [2024-12-09 05:30:45.228177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.467 [2024-12-09 05:30:45.228198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.467 [2024-12-09 05:30:45.236666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:31.467 [2024-12-09 05:30:45.237505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.467 [2024-12-09 05:30:45.237525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.467 [2024-12-09 05:30:45.246061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:31.467 [2024-12-09 05:30:45.246928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.467 [2024-12-09 05:30:45.246949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.467 [2024-12-09 05:30:45.255416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:31.467 [2024-12-09 05:30:45.256275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.467 [2024-12-09 05:30:45.256296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.264788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:31.468 [2024-12-09 05:30:45.265670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.265691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.274161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:31.468 [2024-12-09 05:30:45.275036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.275057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 
05:30:45.283548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:31.468 [2024-12-09 05:30:45.284424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.284445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.292926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:31.468 [2024-12-09 05:30:45.293795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.293820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.301633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:31.468 [2024-12-09 05:30:45.302462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.302483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.313115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:31.468 [2024-12-09 05:30:45.314348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.314369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.320810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:37:31.468 [2024-12-09 05:30:45.321529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.321549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.330155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:31.468 [2024-12-09 05:30:45.330873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.330894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.339526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4de8 00:37:31.468 [2024-12-09 05:30:45.340273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.340293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.348860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:31.468 [2024-12-09 05:30:45.349586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.349608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.358182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:31.468 [2024-12-09 05:30:45.358893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.358920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.367525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:37:31.468 [2024-12-09 05:30:45.368270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.368291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.376869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:37:31.468 [2024-12-09 05:30:45.377600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.377621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.386193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:31.468 [2024-12-09 05:30:45.386887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.386907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.395510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:31.468 [2024-12-09 05:30:45.396230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.396252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.404842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:37:31.468 [2024-12-09 05:30:45.405575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.405596] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.414170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:37:31.468 [2024-12-09 05:30:45.414884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.414905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.423508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:37:31.468 [2024-12-09 05:30:45.424244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.424265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.432850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:31.468 [2024-12-09 05:30:45.433555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.433576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.442177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:31.468 [2024-12-09 05:30:45.442898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.442919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.468 [2024-12-09 05:30:45.451506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:31.468 [2024-12-09 05:30:45.452241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.468 [2024-12-09 05:30:45.452262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.460835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:31.730 [2024-12-09 05:30:45.461549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.461570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.470174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9f68 00:37:31.730 [2024-12-09 05:30:45.470881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 
05:30:45.470901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.479500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f31b8 00:37:31.730 [2024-12-09 05:30:45.480224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.480244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.488837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:31.730 [2024-12-09 05:30:45.489560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.489580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.498164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:37:31.730 [2024-12-09 05:30:45.498900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.498920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.507479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:31.730 [2024-12-09 05:30:45.508197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.508217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.516819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:31.730 [2024-12-09 05:30:45.517560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.517583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.526174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:31.730 [2024-12-09 05:30:45.526891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.526911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.535508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:31.730 [2024-12-09 05:30:45.536256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6690 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.536277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.544849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:31.730 [2024-12-09 05:30:45.545587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.545609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.554163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:37:31.730 [2024-12-09 05:30:45.554884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.554904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.563502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:37:31.730 [2024-12-09 05:30:45.564243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.564264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.572843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fef90 00:37:31.730 [2024-12-09 05:30:45.573579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.573600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.582197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:31.730 [2024-12-09 05:30:45.582880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.582901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.591525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:31.730 [2024-12-09 05:30:45.592262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.592282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.600846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3a28 00:37:31.730 [2024-12-09 05:30:45.601574] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.601593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.610179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 00:37:31.730 [2024-12-09 05:30:45.610906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.610928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.619529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:37:31.730 [2024-12-09 05:30:45.620271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.620292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.628881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:37:31.730 [2024-12-09 05:30:45.629611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.629631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.638200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:31.730 [2024-12-09 05:30:45.638902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.638923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.647533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4de8 00:37:31.730 [2024-12-09 05:30:45.648273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.648294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.656848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:31.730 [2024-12-09 05:30:45.657566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.657586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.666174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:31.730 [2024-12-09 
05:30:45.666888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.730 [2024-12-09 05:30:45.666908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.730 [2024-12-09 05:30:45.675503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:37:31.730 [2024-12-09 05:30:45.676231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.731 [2024-12-09 05:30:45.676251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.731 [2024-12-09 05:30:45.684843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:37:31.731 [2024-12-09 05:30:45.685568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.731 [2024-12-09 05:30:45.685589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.731 [2024-12-09 05:30:45.694178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:31.731 [2024-12-09 05:30:45.694895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.731 [2024-12-09 05:30:45.694916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.731 [2024-12-09 05:30:45.703503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:31.731 [2024-12-09 05:30:45.704252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.731 [2024-12-09 05:30:45.704273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.731 [2024-12-09 05:30:45.712813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:37:31.731 [2024-12-09 05:30:45.713562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.731 [2024-12-09 05:30:45.713583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.731 [2024-12-09 05:30:45.722162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:37:31.991 [2024-12-09 05:30:45.722899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:31.991 [2024-12-09 05:30:45.722919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:31.991 27214.00 IOPS, 106.30 MiB/s 00:37:31.991 Latency(us) 00:37:31.991 [2024-12-09T04:30:45.988Z] Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:31.991 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:31.991 nvme0n1 : 2.00 27208.53 106.28 0.00 0.00 4697.90 2348.37 11741.87
00:37:31.991 [2024-12-09T04:30:45.988Z] ===================================================================================================================
00:37:31.991 [2024-12-09T04:30:45.988Z] Total : 27208.53 106.28 0.00 0.00 4697.90 2348.37 11741.87
00:37:31.991 {
00:37:31.991 "results": [
00:37:31.991 {
00:37:31.991 "job": "nvme0n1",
00:37:31.991 "core_mask": "0x2",
00:37:31.991 "workload": "randwrite",
00:37:31.991 "status": "finished",
00:37:31.991 "queue_depth": 128,
00:37:31.991 "io_size": 4096,
00:37:31.991 "runtime": 2.002791,
00:37:31.991 "iops": 27208.53049569326,
00:37:31.991 "mibps": 106.2833222488018,
00:37:31.991 "io_failed": 0,
00:37:31.991 "io_timeout": 0,
00:37:31.991 "avg_latency_us": 4697.902087485243,
00:37:31.991 "min_latency_us": 2348.3733333333334,
00:37:31.991 "max_latency_us": 11741.866666666667
00:37:31.991 }
00:37:31.991 ],
00:37:31.991 "core_count": 1
00:37:31.991 }
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:31.991 | .driver_specific
00:37:31.991 | .nvme_error
00:37:31.991 | .status_code
00:37:31.991 | .command_transient_transport_error'
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 ))
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1816537
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1816537 ']'
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1816537
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:31.991 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816537
00:37:32.251 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:32.251 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:32.251 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1816537'
00:37:32.251 killing process with pid 1816537
00:37:32.251 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1816537
00:37:32.251 Received shutdown signal, test time was about 2.000000 seconds
00:37:32.251
00:37:32.251 Latency(us)
00:37:32.251 [2024-12-09T04:30:46.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:32.251 [2024-12-09T04:30:46.248Z] ===================================================================================================================
00:37:32.251 [2024-12-09T04:30:46.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:32.251 05:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1816537
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1817404
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1817404 /var/tmp/bperf.sock
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1817404 ']'
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:32.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:32.512 05:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:32.772 [2024-12-09 05:30:46.522019] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:37:32.772 [2024-12-09 05:30:46.522118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817404 ]
00:37:32.772 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:32.772 Zero copy mechanism will not be used.
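The get_transient_errcount check traced above reduces to one RPC call plus a jq filter over the bdev_get_iostat JSON, and the (( 213 > 0 )) test passes because 213 WRITE completions in the finished 4096-byte pass carried the transient transport error status. A minimal standalone sketch of that check, assuming the same /var/tmp/bperf.sock RPC socket and nvme0n1 bdev name as this run:

#!/usr/bin/env bash
# Sketch of digest.sh's get_transient_errcount, reconstructed from the
# digest.sh@27/@28 trace above. Assumes bdevperf is serving RPCs on
# /var/tmp/bperf.sock and that bdev_nvme_set_options --nvme-error-stat is
# in effect, so per-status NVMe error counters appear in the iostat JSON.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

errcount=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# digest.sh@71 only asserts the counter is non-zero; this run counted 213.
(( errcount > 0 )) && echo "transient transport errors: $errcount"

The summary table above is self-consistent, too: 27208.53 IOPS of 4096-byte writes is 27208.53 * 4096 / 1048576, or about 106.28 MiB/s, matching the reported mibps field.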
00:37:32.772 [2024-12-09 05:30:46.655780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:32.772 [2024-12-09 05:30:46.730675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:33.340 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:33.340 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:37:33.340 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:33.340 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:33.599 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:33.599 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:33.599 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:33.599 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:33.599 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:33.599 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:33.882 nvme0n1
00:37:33.882 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:33.882 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:33.883 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:33.883 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:33.883 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:33.883 05:30:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:33.883 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:33.883 Zero copy mechanism will not be used.
00:37:33.883 Running I/O for 2 seconds...
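That trace is the complete injection recipe for this 131072-byte pass: per-status error counting and unlimited retries on the host bdev, data digest verification (--ddgst) enabled on the NVMe/TCP attach, and the accel layer told to corrupt crc32c results on an interval (-i 32, which I read as every 32nd digest operation), so a steady trickle of WRITE PDUs goes out with a bad data digest and completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). Collapsed into a plain script under those assumptions, with the same socket, target address, and subsystem NQN as this run:

#!/usr/bin/env bash
# Condensed replay of the RPC sequence traced above. bdevperf itself was
# launched separately (digest.sh@57) as:
#   bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock"

# Count NVMe errors per status code and retry failed I/O indefinitely, so
# injected digest errors are tallied by bdev_get_iostat instead of failing the job.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start clean: no injection active while the controller attaches.
$RPC accel_error_inject_error -o crc32c -t disable

# Attach with data digest enabled; every data PDU now carries a CRC32C that
# the transport verifies in tcp.c:data_crc32_calc_done.
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c results with the arguments the test used (-o crc32c -t corrupt -i 32);
# each hit surfaces below as a "Data digest error" plus a (00/22) completion.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the timed run through the bdevperf RPC helper.
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Keeping the rpc.py path and the -s flag in one $RPC string works here only because neither contains spaces; digest.sh wraps the same call in a bperf_rpc function, which is the more robust pattern.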
00:37:33.883 [2024-12-09 05:30:47.837824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.883 [2024-12-09 05:30:47.838020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.883 [2024-12-09 05:30:47.838053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.883 [2024-12-09 05:30:47.847918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.883 [2024-12-09 05:30:47.848188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.883 [2024-12-09 05:30:47.848214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.883 [2024-12-09 05:30:47.855672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.883 [2024-12-09 05:30:47.855955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.883 [2024-12-09 05:30:47.855979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.883 [2024-12-09 05:30:47.862411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.883 [2024-12-09 05:30:47.862482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.883 [2024-12-09 05:30:47.862503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.883 [2024-12-09 05:30:47.867097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.883 [2024-12-09 05:30:47.867159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.883 [2024-12-09 05:30:47.867179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.883 [2024-12-09 05:30:47.871658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.883 [2024-12-09 05:30:47.871726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.883 [2024-12-09 05:30:47.871747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.877548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.877615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.877635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.882719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.882799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.882824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.891038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.891103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.891123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.895874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.895946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.895966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.904152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.904228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.904247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.908578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.908644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.908664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.913209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.913285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.913305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.917839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.917901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.917921] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.923864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.924148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.924168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.931456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.931768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.931789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.939537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.939842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.939869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.944737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.945022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.945043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.953651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.953714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.953734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.145 [2024-12-09 05:30:47.962285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.145 [2024-12-09 05:30:47.962356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.145 [2024-12-09 05:30:47.962379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:47.966244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:47.966310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:47.966330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:47.970076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:47.970136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:47.970156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:47.973713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:47.973784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:47.973804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:47.981134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:47.981194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:47.981214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:47.985353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:47.985442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:47.985463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:47.989344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:47.989409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:47.989429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:47.993268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:47.993331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:47.993351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:47.997386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:47.997444] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:47.997464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.001443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.001521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.001541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.005441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.005512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.005532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.009412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.009519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.009540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.016325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.016621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.016642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.021931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.022019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.022039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.026133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.026222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.026242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.030088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 
05:30:48.030149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.030168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.034550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.034635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.034654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.039194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.039267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.039290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.043237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.043303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.043323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.047114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.047180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.047199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.050756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.050909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.050929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.059358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.059434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.146 [2024-12-09 05:30:48.059454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.146 [2024-12-09 05:30:48.063069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.146 [2024-12-09 05:30:48.063128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.147 [2024-12-09 05:30:48.063147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.147 [2024-12-09 05:30:48.066957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.147 [2024-12-09 05:30:48.067014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.147 [2024-12-09 05:30:48.067034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.147 [2024-12-09 05:30:48.070882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.147 [2024-12-09 05:30:48.070944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.147 [2024-12-09 05:30:48.070963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.147 [2024-12-09 05:30:48.075794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.147 [2024-12-09 05:30:48.075870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.147 [2024-12-09 05:30:48.075890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.147 [2024-12-09 05:30:48.079566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.147 [2024-12-09 05:30:48.079636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.147 [2024-12-09 05:30:48.079655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.147 [2024-12-09 05:30:48.083345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.147 [2024-12-09 05:30:48.083589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.147 [2024-12-09 05:30:48.083609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.147 [2024-12-09 05:30:48.089493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.147 [2024-12-09 05:30:48.089585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.147 [2024-12-09 05:30:48.089605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.147 [2024-12-09 05:30:48.096162] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:34.147 [2024-12-09 05:30:48.096231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:34.147 [2024-12-09 05:30:48.096251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line sequence repeats from 05:30:48.103 through 05:30:48.434 for WRITE commands (nsid:1, len:32) at lba:1760, 18752, 24832, 23616, 5408, 16640, 2336, 25120, 17056, 11680, 10464, 24064 and dozens of further LBAs: each data PDU on tqpair=(0x618000005080) fails digest verification and the command completes with TRANSIENT TRANSPORT ERROR (00/22), sqhd stepping through 0002/0022/0042/0062, p:0 m:0 dnr:0 throughout ...]
00:37:34.673 [2024-12-09 05:30:48.443392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:34.673 [2024-12-09 05:30:48.443637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:34.673 [2024-12-09 05:30:48.443657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
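Each data_crc32_calc_done error above reports that the CRC32C data digest (DDGST) computed over a received NVMe/TCP data PDU did not match the digest carried in the PDU. SPDK calculates the digest asynchronously and compares it in a completion callback; the check itself reduces to the comparison sketched below. This is a minimal standalone sketch, not SPDK's code: the bitwise crc32c() loop, the buffer size and the corruption step are illustrative assumptions.

/* crc32c_digest_check.c — minimal sketch of the check the log reports failing. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Software CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
 * initial value and final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512];                 /* stand-in for a data PDU's DATA field */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t ddgst = crc32c(payload, sizeof(payload));   /* digest the sender attached */
    payload[100] ^= 0x01;                 /* flip one bit "in flight" */

    bool ok = crc32c(payload, sizeof(payload)) == ddgst; /* receiver's verification */
    printf("%s\n", ok ? "digest ok" : "Data digest error");
    return 0;
}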
00:37:34.673 [2024-12-09 05:30:48.452620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:34.673 [2024-12-09 05:30:48.452928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:34.673 [2024-12-09 05:30:48.452950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the pattern continues from 05:30:48.461 through 05:30:48.778 for WRITEs at lba:13600, 7232, 7296, 20992, 16064, 23104, 3040, 14016, 14752, 17024, 5536, 12096 and dozens of further LBAs, all failing the data digest check on tqpair=(0x618000005080) and completing with TRANSIENT TRANSPORT ERROR (00/22), dnr:0 ...]
00:37:34.937 [2024-12-09 05:30:48.786398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:34.937 [2024-12-09 05:30:48.786744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:34.937 [2024-12-09 05:30:48.786766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
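The completion lines render the NVMe status as (00/22): status code type 00h (generic command status) and status code 22h, which the driver prints as COMMAND TRANSIENT TRANSPORT ERROR; p, m and dnr are the phase, more and do-not-retry bits, and dnr:0 leaves the command retryable by the host. A hedged sketch of unpacking those fields from the completion queue entry's 15-bit status field (layout per the NVMe base specification; the sample value is constructed for illustration):

/* status_decode.c — unpacking the (00/22) printed above. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 15-bit status field (CQE DW3 bits 31:17):
     * SC = bits 0-7, SCT = bits 8-10, CRD = bits 11-12, M = bit 13, DNR = bit 14. */
    uint16_t status = (0x0 << 8) | 0x22;   /* SCT 00h, SC 22h, as in the log */

    unsigned sc  = status & 0xFF;
    unsigned sct = (status >> 8) & 0x7;
    unsigned m   = (status >> 13) & 0x1;
    unsigned dnr = (status >> 14) & 0x1;

    printf("(%02x/%02x) m:%u dnr:%u -> %s\n", sct, sc, m, dnr,
           (sct == 0x0 && sc == 0x22) ? "COMMAND TRANSIENT TRANSPORT ERROR"
                                      : "other");
    return 0;
}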
00:37:34.937 [2024-12-09 05:30:48.796461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:34.937 [2024-12-09 05:30:48.796809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:34.937 [2024-12-09 05:30:48.796836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... digest failures continue at lba:25248, 18304, 18208 and 20384 between 05:30:48.805 and 05:30:48.829 ...]
00:37:34.938 4820.00 IOPS, 602.50 MiB/s [2024-12-09T04:30:48.935Z]
[... digest failures resume from 05:30:48.839 through 05:30:48.976 for WRITEs at lba:2208, 19712, 9920, 5920, 23008, 19648, 22848, 14080, 20704, 18080, 5152, 21088, 1696, 4320 and 13088, each completing with TRANSIENT TRANSPORT ERROR (00/22), dnr:0 ...]
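The interleaved performance sample above (4820.00 IOPS, 602.50 MiB/s) is consistent with the surrounding commands if the namespace uses 4096-byte logical blocks — an assumption, since the log only shows len:32 (logical blocks per WRITE):

  32 blocks/IO x 4096 B/block = 131072 B = 128 KiB per WRITE
  4820.00 IO/s x 128 KiB/IO = 616960 KiB/s = 602.50 MiB/s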
00:37:35.200 [2024-12-09 05:30:48.986764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:35.200 [2024-12-09 05:30:48.986957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:35.200 [2024-12-09 05:30:48.986979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the digest-error pattern runs on from 05:30:48.996 through 05:30:49.182 for WRITEs at lba:7520, 16320, 9216, 21184, 16960, 16768, 25216, 19488, 25536, 14912, 23584, 9664, 17024, 14368, 20640, 23776, 20800, 21312, 20960 and 17600, all on tqpair=(0x618000005080) with TRANSIENT TRANSPORT ERROR (00/22), dnr:0 ...]
00:37:35.201 [2024-12-09
05:30:49.190318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.201 [2024-12-09 05:30:49.190383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.201 [2024-12-09 05:30:49.190403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.199679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.199960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.199982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.208199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.208444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.208465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.217737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.218039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.218061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.228537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.228807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.228835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.239696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.239946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.239968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.250323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.250576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.250597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.261319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.261576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.261602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.272029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.272352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.272372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.282708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.282999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.283021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.293126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.293421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.293443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.303766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.304050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.304072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.314140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.314476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.314497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.324458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.324760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.462 [2024-12-09 05:30:49.324782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.462 [2024-12-09 05:30:49.335428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.462 [2024-12-09 05:30:49.335687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.335708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.346263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.346366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.346387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.357746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.358017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.358038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.368929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.369244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.369266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.378768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.378845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.378866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.388414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.388695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.388716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.395506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.395756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:35.463 [2024-12-09 05:30:49.395777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.403433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.403613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.403632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.413514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.413811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.413837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.420452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.420508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.420529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.428828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.429130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.429154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.436791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.436989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.437010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.463 [2024-12-09 05:30:49.446295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.463 [2024-12-09 05:30:49.446577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.463 [2024-12-09 05:30:49.446598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.456767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.456829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.456849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.468364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.468659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.468681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.479866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.480159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.480180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.491281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.491482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.491503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.502431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.502710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.502731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.514186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.514451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.514472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.525607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.525856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.525877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.536874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.537125] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.537145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.548391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.548464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.548485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.560663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.560935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.560958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.571679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.724 [2024-12-09 05:30:49.571983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.724 [2024-12-09 05:30:49.572005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.724 [2024-12-09 05:30:49.582950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.583238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.583260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.594288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.594540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.594562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.605981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.606164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.606185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.615538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.615780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.615801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.626325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.626433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.626454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.637225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.637525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.637546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.648315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.648606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.648628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.659946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.660185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.660207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.670596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.670874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.670894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.679765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.679972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.679993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.690218] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.690301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.690321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.701050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.701453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.701475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.725 [2024-12-09 05:30:49.712511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.725 [2024-12-09 05:30:49.712823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.725 [2024-12-09 05:30:49.712848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.987 [2024-12-09 05:30:49.723893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.987 [2024-12-09 05:30:49.724202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.987 [2024-12-09 05:30:49.724223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.987 [2024-12-09 05:30:49.736127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.987 [2024-12-09 05:30:49.736432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.987 [2024-12-09 05:30:49.736453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.987 [2024-12-09 05:30:49.747403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.987 [2024-12-09 05:30:49.747459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.987 [2024-12-09 05:30:49.747480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.987 [2024-12-09 05:30:49.755484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.987 [2024-12-09 05:30:49.755792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.987 [2024-12-09 05:30:49.755812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:37:35.987 [2024-12-09 05:30:49.765959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.987 [2024-12-09 05:30:49.766099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.987 [2024-12-09 05:30:49.766119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.987 [2024-12-09 05:30:49.774658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.987 [2024-12-09 05:30:49.774810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.987 [2024-12-09 05:30:49.774837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.987 [2024-12-09 05:30:49.784966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.987 [2024-12-09 05:30:49.785034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.987 [2024-12-09 05:30:49.785054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.987 [2024-12-09 05:30:49.794701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.987 [2024-12-09 05:30:49.794961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.987 [2024-12-09 05:30:49.794982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.987 [2024-12-09 05:30:49.803432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.987 [2024-12-09 05:30:49.803708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.987 [2024-12-09 05:30:49.803730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.988 [2024-12-09 05:30:49.810223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.988 [2024-12-09 05:30:49.810290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.988 [2024-12-09 05:30:49.810310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.988 [2024-12-09 05:30:49.819264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:35.988 [2024-12-09 05:30:49.819560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.988 [2024-12-09 05:30:49.819583] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:35.988 [2024-12-09 05:30:49.825543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:35.988 [2024-12-09 05:30:49.825621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:35.988 [2024-12-09 05:30:49.825642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:35.988 [2024-12-09 05:30:49.833864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:35.988 [2024-12-09 05:30:49.834040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:35.988 [2024-12-09 05:30:49.834060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:35.988 3955.00 IOPS, 494.38 MiB/s
00:37:35.988 Latency(us)
00:37:35.988 [2024-12-09T04:30:49.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:35.988 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:37:35.988 nvme0n1 : 2.01 3952.33 494.04 0.00 0.00 4040.12 1536.00 12451.84
00:37:35.988 [2024-12-09T04:30:49.985Z] ===================================================================================================================
00:37:35.988 [2024-12-09T04:30:49.985Z] Total : 3952.33 494.04 0.00 0.00 4040.12 1536.00 12451.84
00:37:35.988 {
00:37:35.988   "results": [
00:37:35.988     {
00:37:35.988       "job": "nvme0n1",
00:37:35.988       "core_mask": "0x2",
00:37:35.988       "workload": "randwrite",
00:37:35.988       "status": "finished",
00:37:35.988       "queue_depth": 16,
00:37:35.988       "io_size": 131072,
00:37:35.988       "runtime": 2.006411,
00:37:35.988       "iops": 3952.330803609031,
00:37:35.988       "mibps": 494.0413504511289,
00:37:35.988       "io_failed": 0,
00:37:35.988       "io_timeout": 0,
00:37:35.988       "avg_latency_us": 4040.1199024800335,
00:37:35.988       "min_latency_us": 1536.0,
00:37:35.988       "max_latency_us": 12451.84
00:37:35.988     }
00:37:35.988   ],
00:37:35.988   "core_count": 1
00:37:35.988 }
00:37:35.988 05:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:35.988 05:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:35.988 05:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:35.988 | .driver_specific
00:37:35.988 | .nvme_error
00:37:35.988 | .status_code
00:37:35.988 | .command_transient_transport_error'
00:37:35.988 05:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 256 > 0 ))
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1817404
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1817404 ']'
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- #
kill -0 1817404
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1817404
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1817404'
00:37:36.249 killing process with pid 1817404
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1817404
00:37:36.249 Received shutdown signal, test time was about 2.000000 seconds
00:37:36.249
00:37:36.249 Latency(us)
00:37:36.249 [2024-12-09T04:30:50.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:36.249 [2024-12-09T04:30:50.246Z] ===================================================================================================================
00:37:36.249 [2024-12-09T04:30:50.246Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:36.249 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1817404
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1814804
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1814804 ']'
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1814804
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814804
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1814804'
00:37:36.820 killing process with pid 1814804
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1814804
00:37:36.820 05:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1814804
00:37:37.390
00:37:37.390 real 0m18.354s
00:37:37.390 user 0m35.562s
00:37:37.390 sys 0m3.758s
00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:37.390 ************************************
00:37:37.390 END TEST nvmf_digest_error
00:37:37.390 ************************************
00:37:37.390 05:30:51
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:37.390 rmmod nvme_tcp 00:37:37.390 rmmod nvme_fabrics 00:37:37.390 rmmod nvme_keyring 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1814804 ']' 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1814804 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1814804 ']' 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1814804 00:37:37.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1814804) - No such process 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1814804 is not found' 00:37:37.390 Process with pid 1814804 is not found 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:37.390 05:30:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:39.937 00:37:39.937 real 0m48.147s 00:37:39.937 user 1m15.292s 00:37:39.937 sys 0m13.886s 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:39.937 
************************************
00:37:39.937 END TEST nvmf_digest
00:37:39.937 ************************************
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:37:39.937 ************************************
00:37:39.937 START TEST nvmf_bdevperf
00:37:39.937 ************************************
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:37:39.937 * Looking for test storage...
00:37:39.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.937 --rc genhtml_branch_coverage=1 00:37:39.937 --rc genhtml_function_coverage=1 00:37:39.937 --rc genhtml_legend=1 00:37:39.937 --rc geninfo_all_blocks=1 00:37:39.937 --rc geninfo_unexecuted_blocks=1 00:37:39.937 00:37:39.937 ' 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.937 --rc genhtml_branch_coverage=1 00:37:39.937 --rc genhtml_function_coverage=1 00:37:39.937 --rc genhtml_legend=1 00:37:39.937 --rc geninfo_all_blocks=1 00:37:39.937 --rc geninfo_unexecuted_blocks=1 00:37:39.937 00:37:39.937 ' 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.937 --rc genhtml_branch_coverage=1 00:37:39.937 --rc genhtml_function_coverage=1 00:37:39.937 --rc genhtml_legend=1 00:37:39.937 --rc geninfo_all_blocks=1 00:37:39.937 --rc geninfo_unexecuted_blocks=1 00:37:39.937 00:37:39.937 ' 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.937 --rc genhtml_branch_coverage=1 00:37:39.937 --rc genhtml_function_coverage=1 00:37:39.937 --rc genhtml_legend=1 00:37:39.937 --rc geninfo_all_blocks=1 00:37:39.937 --rc geninfo_unexecuted_blocks=1 00:37:39.937 00:37:39.937 ' 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:39.937 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:39.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:39.938 05:30:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:48.075 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:48.076 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:48.076 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
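The xtrace above shows gather_supported_nvmf_pci_devs at work: nvmf/common.sh fills per-family arrays (e810, x722, mlx) from a PCI bus cache keyed by vendor:device, narrows pci_devs to the e810 list selected for this job, and, continuing below, resolves each matching function to its kernel net interface through sysfs. A minimal standalone sketch of the same idea, assuming plain sysfs walks rather than the script's cached lookups:

#!/usr/bin/env bash
# Match Intel E810 functions (vendor 0x8086, device 0x1592/0x159b) and print
# the net interface bound to each, mirroring the "Found 0000:31:00.0 ..." and
# "Found net devices under ..." lines in the trace.
intel=0x8086
e810_ids=(0x1592 0x159b)
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    for id in "${e810_ids[@]}"; do
        if [[ $vendor == "$intel" && $device == "$id" ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net" 2>/dev/null    # e.g. cvl_0_0 / cvl_0_1
        fi
    done
done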
00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:48.076 Found net devices under 0000:31:00.0: cvl_0_0 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:48.076 Found net devices under 0000:31:00.1: cvl_0_1 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:48.076 05:31:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:48.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:48.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:37:48.076 00:37:48.076 --- 10.0.0.2 ping statistics --- 00:37:48.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.076 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:48.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:48.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:37:48.076 00:37:48.076 --- 10.0.0.1 ping statistics --- 00:37:48.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.076 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1822494 00:37:48.076 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1822494 00:37:48.077 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:48.077 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1822494 ']' 00:37:48.077 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.077 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:48.077 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:48.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.077 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:48.077 05:31:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.077 [2024-12-09 05:31:01.259950] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
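nvmf_tcp_init, traced above, builds the point-to-point topology used for the rest of the run: the target port cvl_0_0 is moved into a fresh network namespace and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP port 4420 on the initiator side, and one ping in each direction verifies the path before the target starts. Condensed, the sequence is (commands and names exactly as logged; root required):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns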
00:37:48.077 [2024-12-09 05:31:01.260049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:48.077 [2024-12-09 05:31:01.409521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:48.077 [2024-12-09 05:31:01.513308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:48.077 [2024-12-09 05:31:01.513354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:48.077 [2024-12-09 05:31:01.513367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:48.077 [2024-12-09 05:31:01.513379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:48.077 [2024-12-09 05:31:01.513389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:48.077 [2024-12-09 05:31:01.515539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:48.077 [2024-12-09 05:31:01.515635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.077 [2024-12-09 05:31:01.515659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:48.077 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:48.077 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:48.077 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:48.077 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:48.077 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.077 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:48.077 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:48.077 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.077 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.338 [2024-12-09 05:31:02.073694] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.338 Malloc0 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
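With networking up, tgt_init launches nvmf_tgt inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE invocation above), waits for its /var/tmp/spdk.sock RPC socket, and then configures it over RPC. The same five-step bring-up expressed with scripts/rpc.py, as a sketch: the test drives the identical RPCs through its rpc_cmd wrapper, and the last two steps follow in the trace below.

rpc.py nvmf_create_transport -t tcp -o -u 8192    # transport options exactly as traced (-u = io-unit-size in bytes)
rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420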
00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.338 [2024-12-09 05:31:02.190606] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:48.338 { 00:37:48.338 "params": { 00:37:48.338 "name": "Nvme$subsystem", 00:37:48.338 "trtype": "$TEST_TRANSPORT", 00:37:48.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:48.338 "adrfam": "ipv4", 00:37:48.338 "trsvcid": "$NVMF_PORT", 00:37:48.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:48.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:48.338 "hdgst": ${hdgst:-false}, 00:37:48.338 "ddgst": ${ddgst:-false} 00:37:48.338 }, 00:37:48.338 "method": "bdev_nvme_attach_controller" 00:37:48.338 } 00:37:48.338 EOF 00:37:48.338 )") 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:48.338 05:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:48.338 "params": { 00:37:48.338 "name": "Nvme1", 00:37:48.338 "trtype": "tcp", 00:37:48.338 "traddr": "10.0.0.2", 00:37:48.338 "adrfam": "ipv4", 00:37:48.338 "trsvcid": "4420", 00:37:48.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:48.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:48.338 "hdgst": false, 00:37:48.338 "ddgst": false 00:37:48.338 }, 00:37:48.338 "method": "bdev_nvme_attach_controller" 00:37:48.338 }' 00:37:48.338 [2024-12-09 05:31:02.285032] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
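gen_nvmf_target_json, whose expansion is traced above, uses a bash pattern worth noting: it appends one here-doc JSON fragment per subsystem to an array via command substitution, joins the fragments with IFS=',', and pretty-prints the result through jq so bdevperf can consume it from an anonymous pipe (--json /dev/fd/62). A cut-down sketch of the pattern with a hypothetical two-entry fragment, not the full template from common.sh:

config=()
for subsystem in 1 2; do
    # the command substitution captures one JSON fragment per loop pass
    config+=("$(cat <<EOF
{ "name": "Nvme$subsystem", "method": "bdev_nvme_attach_controller" }
EOF
)")
done
IFS=','
echo "[${config[*]}]" | jq .    # "${config[*]}" joins elements with the first char of IFS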
00:37:48.338 [2024-12-09 05:31:02.285147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1822631 ] 00:37:48.599 [2024-12-09 05:31:02.438966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.599 [2024-12-09 05:31:02.565337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.171 Running I/O for 1 seconds... 00:37:50.110 8239.00 IOPS, 32.18 MiB/s 00:37:50.110 Latency(us) 00:37:50.110 [2024-12-09T04:31:04.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.110 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:50.110 Verification LBA range: start 0x0 length 0x4000 00:37:50.111 Nvme1n1 : 1.01 8323.42 32.51 0.00 0.00 15286.54 1652.05 13707.95 00:37:50.111 [2024-12-09T04:31:04.108Z] =================================================================================================================== 00:37:50.111 [2024-12-09T04:31:04.108Z] Total : 8323.42 32.51 0.00 0.00 15286.54 1652.05 13707.95 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1823084 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:50.681 { 00:37:50.681 "params": { 00:37:50.681 "name": "Nvme$subsystem", 00:37:50.681 "trtype": "$TEST_TRANSPORT", 00:37:50.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:50.681 "adrfam": "ipv4", 00:37:50.681 "trsvcid": "$NVMF_PORT", 00:37:50.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:50.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:50.681 "hdgst": ${hdgst:-false}, 00:37:50.681 "ddgst": ${ddgst:-false} 00:37:50.681 }, 00:37:50.681 "method": "bdev_nvme_attach_controller" 00:37:50.681 } 00:37:50.681 EOF 00:37:50.681 )") 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
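The first bdevperf pass above (-q 128 -o 4096 -w verify -t 1) is a plain connectivity and performance check; its table reports roughly 8.3k IOPS against the Malloc0 namespace. The second pass, whose config generation is traced here and below, runs for 15 seconds with -f so bdevperf stays up across controller loss: host/bdevperf.sh records the bdevperf pid, lets I/O run, then kill -9s the target (pid 1822494) mid-run. That kill is what produces the long run of 'ABORTED - SQ DELETION' completions below: every command still in flight on I/O qpair 1 is failed back when the target's submission queues vanish. A sketch of the driving flow, assuming the script restarts the target afterwards so the surviving bdevperf can reconnect (the exact host/bdevperf.sh structure may differ):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!      # 1823084 in this run
sleep 3
kill -9 "$nvmfpid"  # 1822494: drop the target; in-flight I/O aborts (SQ deletion)
sleep 3
tgt_init            # assumed: bring the target back so bdevperf can reconnect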
00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:50.681 05:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:50.681 "params": { 00:37:50.681 "name": "Nvme1", 00:37:50.681 "trtype": "tcp", 00:37:50.681 "traddr": "10.0.0.2", 00:37:50.681 "adrfam": "ipv4", 00:37:50.681 "trsvcid": "4420", 00:37:50.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:50.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:50.681 "hdgst": false, 00:37:50.681 "ddgst": false 00:37:50.681 }, 00:37:50.681 "method": "bdev_nvme_attach_controller" 00:37:50.681 }' 00:37:50.681 [2024-12-09 05:31:04.638122] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:50.681 [2024-12-09 05:31:04.638232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1823084 ] 00:37:50.941 [2024-12-09 05:31:04.784105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.941 [2024-12-09 05:31:04.882179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:51.511 Running I/O for 15 seconds... 00:37:53.833 9993.00 IOPS, 39.04 MiB/s [2024-12-09T04:31:07.830Z] 10089.00 IOPS, 39.41 MiB/s [2024-12-09T04:31:07.830Z] 05:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1822494 00:37:53.833 05:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:53.833 [2024-12-09 05:31:07.587573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 
05:31:07.587785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.587981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.587994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.833 [2024-12-09 05:31:07.588457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.833 [2024-12-09 05:31:07.588480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.833 [2024-12-09 05:31:07.588494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:53.834 [2024-12-09 05:31:07.588802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.588986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.588999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.589010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.589022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.589032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.834 [2024-12-09 05:31:07.589045] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.834 [2024-12-09 05:31:07.589056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for every queued I/O from lba:41744 through lba:42288 (len:8 each, cids varying), timestamps 05:31:07.589068 through 05:31:07.590679 ...]
00:37:53.836 [2024-12-09 05:31:07.590691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.836 [2024-12-09 05:31:07.590703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.836 [2024-12-09 05:31:07.590718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000394700 is same with the state(6) to be set 00:37:53.836 [2024-12-09 05:31:07.590732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:53.836 [2024-12-09 05:31:07.590742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:53.836 [2024-12-09 05:31:07.590753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42304 len:8 PRP1 0x0 PRP2 0x0 00:37:53.836 [2024-12-09 05:31:07.590765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.836 [2024-12-09 05:31:07.591031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:53.836 [2024-12-09 05:31:07.591052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.836 [2024-12-09 05:31:07.591066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:53.836 [2024-12-09 05:31:07.591077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.836 [2024-12-09 05:31:07.591088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:53.836 [2024-12-09 05:31:07.591098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.836 [2024-12-09 05:31:07.591109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:53.836 [2024-12-09 05:31:07.591120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:53.836 [2024-12-09 05:31:07.591130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.836 [2024-12-09 05:31:07.594915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.836 [2024-12-09 05:31:07.594962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.836 [2024-12-09 05:31:07.595797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.836 [2024-12-09 05:31:07.595829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.836 [2024-12-09 05:31:07.595842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.836 [2024-12-09 05:31:07.596087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.836 [2024-12-09 05:31:07.596327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.836 [2024-12-09 05:31:07.596341] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.836 [2024-12-09 05:31:07.596353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.836 [2024-12-09 05:31:07.596366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.836 [2024-12-09 05:31:07.609234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.836 [2024-12-09 05:31:07.609821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.836 [2024-12-09 05:31:07.609846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.836 [2024-12-09 05:31:07.609858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.836 [2024-12-09 05:31:07.610096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.836 [2024-12-09 05:31:07.610334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.836 [2024-12-09 05:31:07.610347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.836 [2024-12-09 05:31:07.610357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.836 [2024-12-09 05:31:07.610367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.836 [2024-12-09 05:31:07.623422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.836 [2024-12-09 05:31:07.624117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.836 [2024-12-09 05:31:07.624167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.836 [2024-12-09 05:31:07.624183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.836 [2024-12-09 05:31:07.624458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.836 [2024-12-09 05:31:07.624702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.836 [2024-12-09 05:31:07.624716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.836 [2024-12-09 05:31:07.624727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.836 [2024-12-09 05:31:07.624739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
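Every completion in the abort storm above carries the same status pair "(00/08)": status code type 0x0 (generic command status) and status code 0x08, "command aborted due to SQ deletion", which is what a queue pair's queued reads get when their submission queue is deleted out from under them during a controller reset; the trailing p:0 m:0 dnr:0 are the phase, more, and do-not-retry bits of the same status halfword. A minimal decoder sketch in C for that halfword (CQE dword 3, upper 16 bits; the helper name is illustrative, not SPDK's API):

#include <stdint.h>
#include <stdio.h>

/* Decode the NVMe completion status halfword: bit 0 phase tag,
 * bits 8:1 status code (SC), bits 11:9 status code type (SCT),
 * bits 13:12 command retry delay, bit 14 more, bit 15 do-not-retry. */
static void decode_status(uint16_t status)
{
    unsigned p   =  status        & 0x1;
    unsigned sc  = (status >> 1)  & 0xff;
    unsigned sct = (status >> 9)  & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    decode_status(0x08 << 1); /* prints "(00/08) p:0 m:0 dnr:0", as in the log */
    return 0;
}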
00:37:53.836 [2024-12-09 05:31:07.637614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.836 [2024-12-09 05:31:07.638327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.836 [2024-12-09 05:31:07.638379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.836 [2024-12-09 05:31:07.638394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.836 [2024-12-09 05:31:07.638667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.836 [2024-12-09 05:31:07.638923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.836 [2024-12-09 05:31:07.638939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.836 [2024-12-09 05:31:07.638950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.836 [2024-12-09 05:31:07.638962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.836 [2024-12-09 05:31:07.651821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.836 [2024-12-09 05:31:07.652515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.837 [2024-12-09 05:31:07.652567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.837 [2024-12-09 05:31:07.652583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.837 [2024-12-09 05:31:07.652869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.837 [2024-12-09 05:31:07.653115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.837 [2024-12-09 05:31:07.653129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.837 [2024-12-09 05:31:07.653140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.837 [2024-12-09 05:31:07.653152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.837 [2024-12-09 05:31:07.666029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.837 [2024-12-09 05:31:07.666607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.837 [2024-12-09 05:31:07.666661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.837 [2024-12-09 05:31:07.666677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.837 [2024-12-09 05:31:07.666968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.837 [2024-12-09 05:31:07.667213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.837 [2024-12-09 05:31:07.667227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.837 [2024-12-09 05:31:07.667239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.837 [2024-12-09 05:31:07.667251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.837 [2024-12-09 05:31:07.680130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.837 [2024-12-09 05:31:07.680843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.837 [2024-12-09 05:31:07.680897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.837 [2024-12-09 05:31:07.680914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.837 [2024-12-09 05:31:07.681199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.837 [2024-12-09 05:31:07.681445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.837 [2024-12-09 05:31:07.681459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.837 [2024-12-09 05:31:07.681470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.837 [2024-12-09 05:31:07.681482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.837 [2024-12-09 05:31:07.694175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.837 [2024-12-09 05:31:07.694895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.837 [2024-12-09 05:31:07.694951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.837 [2024-12-09 05:31:07.694967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.837 [2024-12-09 05:31:07.695245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.837 [2024-12-09 05:31:07.695492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.837 [2024-12-09 05:31:07.695506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.837 [2024-12-09 05:31:07.695517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.837 [2024-12-09 05:31:07.695529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.837 [2024-12-09 05:31:07.708420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.837 [2024-12-09 05:31:07.709163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.837 [2024-12-09 05:31:07.709222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.837 [2024-12-09 05:31:07.709239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.837 [2024-12-09 05:31:07.709519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.837 [2024-12-09 05:31:07.709766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.837 [2024-12-09 05:31:07.709781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.837 [2024-12-09 05:31:07.709792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.837 [2024-12-09 05:31:07.709806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.837 [2024-12-09 05:31:07.722490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.837 [2024-12-09 05:31:07.723258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.837 [2024-12-09 05:31:07.723321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.837 [2024-12-09 05:31:07.723338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.837 [2024-12-09 05:31:07.723632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.837 [2024-12-09 05:31:07.723898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.837 [2024-12-09 05:31:07.723914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.837 [2024-12-09 05:31:07.723925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.837 [2024-12-09 05:31:07.723938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.837 [2024-12-09 05:31:07.736710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.837 [2024-12-09 05:31:07.737439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.837 [2024-12-09 05:31:07.737508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.837 [2024-12-09 05:31:07.737526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.837 [2024-12-09 05:31:07.737831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.837 [2024-12-09 05:31:07.738080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.837 [2024-12-09 05:31:07.738095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.837 [2024-12-09 05:31:07.738108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.837 [2024-12-09 05:31:07.738121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.838 [2024-12-09 05:31:07.750862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.838 [2024-12-09 05:31:07.751556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.838 [2024-12-09 05:31:07.751593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.838 [2024-12-09 05:31:07.751607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.838 [2024-12-09 05:31:07.751863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.838 [2024-12-09 05:31:07.752108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.838 [2024-12-09 05:31:07.752122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.838 [2024-12-09 05:31:07.752133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.838 [2024-12-09 05:31:07.752145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.838 [2024-12-09 05:31:07.765086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.838 [2024-12-09 05:31:07.765858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.838 [2024-12-09 05:31:07.765934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.838 [2024-12-09 05:31:07.765953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.838 [2024-12-09 05:31:07.766247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.838 [2024-12-09 05:31:07.766495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.838 [2024-12-09 05:31:07.766511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.838 [2024-12-09 05:31:07.766531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.838 [2024-12-09 05:31:07.766544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.838 [2024-12-09 05:31:07.779272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.838 [2024-12-09 05:31:07.780006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.838 [2024-12-09 05:31:07.780082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.838 [2024-12-09 05:31:07.780100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.838 [2024-12-09 05:31:07.780394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.838 [2024-12-09 05:31:07.780644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.838 [2024-12-09 05:31:07.780659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.838 [2024-12-09 05:31:07.780671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.838 [2024-12-09 05:31:07.780685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.838 [2024-12-09 05:31:07.793464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.838 [2024-12-09 05:31:07.794179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.838 [2024-12-09 05:31:07.794216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.838 [2024-12-09 05:31:07.794229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.838 [2024-12-09 05:31:07.794475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.838 [2024-12-09 05:31:07.794718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.838 [2024-12-09 05:31:07.794735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.838 [2024-12-09 05:31:07.794746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.838 [2024-12-09 05:31:07.794757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.838 [2024-12-09 05:31:07.807684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.838 [2024-12-09 05:31:07.808319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.838 [2024-12-09 05:31:07.808354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.838 [2024-12-09 05:31:07.808367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.838 [2024-12-09 05:31:07.808609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:53.838 [2024-12-09 05:31:07.808860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.838 [2024-12-09 05:31:07.808881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.838 [2024-12-09 05:31:07.808892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.838 [2024-12-09 05:31:07.808903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.838 [2024-12-09 05:31:07.821873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.838 [2024-12-09 05:31:07.822652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.838 [2024-12-09 05:31:07.822728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:53.838 [2024-12-09 05:31:07.822747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:53.838 [2024-12-09 05:31:07.823060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.101 [2024-12-09 05:31:07.823310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.101 [2024-12-09 05:31:07.823332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.101 [2024-12-09 05:31:07.823347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.101 [2024-12-09 05:31:07.823362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.101 [2024-12-09 05:31:07.836125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.101 [2024-12-09 05:31:07.836867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.101 [2024-12-09 05:31:07.836943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.101 [2024-12-09 05:31:07.836962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.101 [2024-12-09 05:31:07.837257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.101 [2024-12-09 05:31:07.837505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.101 [2024-12-09 05:31:07.837520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.101 [2024-12-09 05:31:07.837533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.101 [2024-12-09 05:31:07.837546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.101 [2024-12-09 05:31:07.850317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.101 [2024-12-09 05:31:07.851105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.101 [2024-12-09 05:31:07.851180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.102 [2024-12-09 05:31:07.851198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.102 [2024-12-09 05:31:07.851493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.102 [2024-12-09 05:31:07.851743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.102 [2024-12-09 05:31:07.851758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.102 [2024-12-09 05:31:07.851772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.102 [2024-12-09 05:31:07.851785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.102 [2024-12-09 05:31:07.864584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.102 [2024-12-09 05:31:07.865245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.102 [2024-12-09 05:31:07.865286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.102 [2024-12-09 05:31:07.865300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.102 [2024-12-09 05:31:07.865543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.102 [2024-12-09 05:31:07.865785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.102 [2024-12-09 05:31:07.865800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.102 [2024-12-09 05:31:07.865811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.102 [2024-12-09 05:31:07.865830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.102 [2024-12-09 05:31:07.878195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.102 [2024-12-09 05:31:07.878793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.102 [2024-12-09 05:31:07.878827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.102 [2024-12-09 05:31:07.878837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.102 [2024-12-09 05:31:07.879005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.102 [2024-12-09 05:31:07.879172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.102 [2024-12-09 05:31:07.879182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.102 [2024-12-09 05:31:07.879191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.102 [2024-12-09 05:31:07.879199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.102 [2024-12-09 05:31:07.891171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.102 [2024-12-09 05:31:07.891672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.102 [2024-12-09 05:31:07.891696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.102 [2024-12-09 05:31:07.891705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.102 [2024-12-09 05:31:07.891884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.102 [2024-12-09 05:31:07.892051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.102 [2024-12-09 05:31:07.892061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.102 [2024-12-09 05:31:07.892070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.102 [2024-12-09 05:31:07.892078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.102 [2024-12-09 05:31:07.904015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.102 [2024-12-09 05:31:07.904624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.102 [2024-12-09 05:31:07.904674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.102 [2024-12-09 05:31:07.904686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.102 [2024-12-09 05:31:07.904903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.102 [2024-12-09 05:31:07.905076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.102 [2024-12-09 05:31:07.905088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.102 [2024-12-09 05:31:07.905097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.102 [2024-12-09 05:31:07.905106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.102 [2024-12-09 05:31:07.916913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.102 [2024-12-09 05:31:07.917510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.102 [2024-12-09 05:31:07.917557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.102 [2024-12-09 05:31:07.917569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.102 [2024-12-09 05:31:07.917766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.102 [2024-12-09 05:31:07.917947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.102 [2024-12-09 05:31:07.917968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.102 [2024-12-09 05:31:07.917976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.102 [2024-12-09 05:31:07.917985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.102 [2024-12-09 05:31:07.929745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.102 [2024-12-09 05:31:07.930285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.102 [2024-12-09 05:31:07.930327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.102 [2024-12-09 05:31:07.930339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.102 [2024-12-09 05:31:07.930533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.102 [2024-12-09 05:31:07.930702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.102 [2024-12-09 05:31:07.930712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.102 [2024-12-09 05:31:07.930720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.102 [2024-12-09 05:31:07.930729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.102 [2024-12-09 05:31:07.942653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.102 [2024-12-09 05:31:07.943523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.102 [2024-12-09 05:31:07.943550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.102 [2024-12-09 05:31:07.943560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.102 [2024-12-09 05:31:07.943737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.102 [2024-12-09 05:31:07.943913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.102 [2024-12-09 05:31:07.943940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.102 [2024-12-09 05:31:07.943948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.102 [2024-12-09 05:31:07.943956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.102 [2024-12-09 05:31:07.955549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.102 [2024-12-09 05:31:07.956101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.102 [2024-12-09 05:31:07.956122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.102 [2024-12-09 05:31:07.956130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.102 [2024-12-09 05:31:07.956295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.102 [2024-12-09 05:31:07.956460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.102 [2024-12-09 05:31:07.956470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.102 [2024-12-09 05:31:07.956478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.102 [2024-12-09 05:31:07.956485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.102 [2024-12-09 05:31:07.968382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.102 [2024-12-09 05:31:07.968876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.102 [2024-12-09 05:31:07.968895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.102 [2024-12-09 05:31:07.968904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.102 [2024-12-09 05:31:07.969068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.102 [2024-12-09 05:31:07.969233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.102 [2024-12-09 05:31:07.969242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.102 [2024-12-09 05:31:07.969249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:07.969257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.103 [2024-12-09 05:31:07.981180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.103 [2024-12-09 05:31:07.981669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.103 [2024-12-09 05:31:07.981688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.103 [2024-12-09 05:31:07.981696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.103 [2024-12-09 05:31:07.981866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.103 [2024-12-09 05:31:07.982031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.103 [2024-12-09 05:31:07.982040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.103 [2024-12-09 05:31:07.982047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:07.982058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.103 [2024-12-09 05:31:07.994101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.103 [2024-12-09 05:31:07.994583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.103 [2024-12-09 05:31:07.994601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.103 [2024-12-09 05:31:07.994609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.103 [2024-12-09 05:31:07.994772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.103 [2024-12-09 05:31:07.994942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.103 [2024-12-09 05:31:07.994952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.103 [2024-12-09 05:31:07.994960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:07.994967] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.103 [2024-12-09 05:31:08.006980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.103 [2024-12-09 05:31:08.007476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.103 [2024-12-09 05:31:08.007494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.103 [2024-12-09 05:31:08.007502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.103 [2024-12-09 05:31:08.007665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.103 [2024-12-09 05:31:08.007834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.103 [2024-12-09 05:31:08.007843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.103 [2024-12-09 05:31:08.007851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:08.007858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.103 [2024-12-09 05:31:08.019870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.103 [2024-12-09 05:31:08.020371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.103 [2024-12-09 05:31:08.020389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.103 [2024-12-09 05:31:08.020396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.103 [2024-12-09 05:31:08.020559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.103 [2024-12-09 05:31:08.020723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.103 [2024-12-09 05:31:08.020731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.103 [2024-12-09 05:31:08.020738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:08.020745] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.103 [2024-12-09 05:31:08.032757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.103 [2024-12-09 05:31:08.033374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.103 [2024-12-09 05:31:08.033412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.103 [2024-12-09 05:31:08.033423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.103 [2024-12-09 05:31:08.033612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.103 [2024-12-09 05:31:08.033780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.103 [2024-12-09 05:31:08.033791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.103 [2024-12-09 05:31:08.033799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:08.033807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.103 [2024-12-09 05:31:08.045692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.103 [2024-12-09 05:31:08.046114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.103 [2024-12-09 05:31:08.046134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.103 [2024-12-09 05:31:08.046142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.103 [2024-12-09 05:31:08.046307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.103 [2024-12-09 05:31:08.046471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.103 [2024-12-09 05:31:08.046480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.103 [2024-12-09 05:31:08.046487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:08.046494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.103 [2024-12-09 05:31:08.058548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.103 [2024-12-09 05:31:08.059037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.103 [2024-12-09 05:31:08.059075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.103 [2024-12-09 05:31:08.059087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.103 [2024-12-09 05:31:08.059277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.103 [2024-12-09 05:31:08.059445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.103 [2024-12-09 05:31:08.059455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.103 [2024-12-09 05:31:08.059464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:08.059472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.103 [2024-12-09 05:31:08.071351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.103 [2024-12-09 05:31:08.071928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.103 [2024-12-09 05:31:08.071966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.103 [2024-12-09 05:31:08.071981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.103 [2024-12-09 05:31:08.072172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.103 [2024-12-09 05:31:08.072340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.103 [2024-12-09 05:31:08.072351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.103 [2024-12-09 05:31:08.072359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:08.072368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.103 [2024-12-09 05:31:08.084235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.103 [2024-12-09 05:31:08.084774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.103 [2024-12-09 05:31:08.084794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.103 [2024-12-09 05:31:08.084802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.103 [2024-12-09 05:31:08.084972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.103 [2024-12-09 05:31:08.085137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.103 [2024-12-09 05:31:08.085145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.103 [2024-12-09 05:31:08.085152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.103 [2024-12-09 05:31:08.085159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.366 [2024-12-09 05:31:08.097044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.366 [2024-12-09 05:31:08.097529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.366 [2024-12-09 05:31:08.097547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.366 [2024-12-09 05:31:08.097555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.366 [2024-12-09 05:31:08.097719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.366 [2024-12-09 05:31:08.097887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.366 [2024-12-09 05:31:08.097896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.366 [2024-12-09 05:31:08.097903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.366 [2024-12-09 05:31:08.097933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.366 [2024-12-09 05:31:08.109952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.366 [2024-12-09 05:31:08.110581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.366 [2024-12-09 05:31:08.110619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.366 [2024-12-09 05:31:08.110630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.366 [2024-12-09 05:31:08.110825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.366 [2024-12-09 05:31:08.110998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.366 [2024-12-09 05:31:08.111008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.366 [2024-12-09 05:31:08.111016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.366 [2024-12-09 05:31:08.111024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.366 [2024-12-09 05:31:08.122731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.366 [2024-12-09 05:31:08.123343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.366 [2024-12-09 05:31:08.123380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.366 [2024-12-09 05:31:08.123393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.366 [2024-12-09 05:31:08.123583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.366 [2024-12-09 05:31:08.123751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.366 [2024-12-09 05:31:08.123762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.366 [2024-12-09 05:31:08.123770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.366 [2024-12-09 05:31:08.123779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.366 [2024-12-09 05:31:08.135646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.366 [2024-12-09 05:31:08.135945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.366 [2024-12-09 05:31:08.135965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.366 [2024-12-09 05:31:08.135974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.366 [2024-12-09 05:31:08.136140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.366 [2024-12-09 05:31:08.136304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.136313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.136320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.136327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.148496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.149150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.149188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.367 [2024-12-09 05:31:08.149199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.367 [2024-12-09 05:31:08.149388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.367 [2024-12-09 05:31:08.149556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.149571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.149579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.149587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.161309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.161677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.161696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.367 [2024-12-09 05:31:08.161705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.367 [2024-12-09 05:31:08.161873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.367 [2024-12-09 05:31:08.162038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.162048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.162055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.162062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.174217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.174628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.174645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.367 [2024-12-09 05:31:08.174653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.367 [2024-12-09 05:31:08.174820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.367 [2024-12-09 05:31:08.174984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.174993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.175000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.175007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.187016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.187550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.187567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.367 [2024-12-09 05:31:08.187575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.367 [2024-12-09 05:31:08.187738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.367 [2024-12-09 05:31:08.187906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.187915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.187923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.187933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.199929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.200393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.200410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.367 [2024-12-09 05:31:08.200418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.367 [2024-12-09 05:31:08.200580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.367 [2024-12-09 05:31:08.200743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.200752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.200759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.200766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.212760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.213351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.213388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.367 [2024-12-09 05:31:08.213400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.367 [2024-12-09 05:31:08.213589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.367 [2024-12-09 05:31:08.213757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.213767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.213775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.213783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.225679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.226068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.226087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.367 [2024-12-09 05:31:08.226096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.367 [2024-12-09 05:31:08.226260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.367 [2024-12-09 05:31:08.226425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.226434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.226442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.226449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.238616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.239471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.239496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.367 [2024-12-09 05:31:08.239505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.367 [2024-12-09 05:31:08.239678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.367 [2024-12-09 05:31:08.239849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.239860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.239868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.239876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.251437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.252224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.252247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.367 [2024-12-09 05:31:08.252256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.367 [2024-12-09 05:31:08.252428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.367 [2024-12-09 05:31:08.252593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.367 [2024-12-09 05:31:08.252602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.367 [2024-12-09 05:31:08.252609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.367 [2024-12-09 05:31:08.252616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.367 [2024-12-09 05:31:08.264353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.367 [2024-12-09 05:31:08.264834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.367 [2024-12-09 05:31:08.264852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.368 [2024-12-09 05:31:08.264861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.368 [2024-12-09 05:31:08.265024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.368 [2024-12-09 05:31:08.265189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.368 [2024-12-09 05:31:08.265198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.368 [2024-12-09 05:31:08.265205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.368 [2024-12-09 05:31:08.265212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.368 [2024-12-09 05:31:08.277240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.368 [2024-12-09 05:31:08.277814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.368 [2024-12-09 05:31:08.277856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.368 [2024-12-09 05:31:08.277871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.368 [2024-12-09 05:31:08.278060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.368 [2024-12-09 05:31:08.278228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.368 [2024-12-09 05:31:08.278238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.368 [2024-12-09 05:31:08.278246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.368 [2024-12-09 05:31:08.278255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.368 [2024-12-09 05:31:08.290143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.368 [2024-12-09 05:31:08.290774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.368 [2024-12-09 05:31:08.290812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.368 [2024-12-09 05:31:08.290830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.368 [2024-12-09 05:31:08.291020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.368 [2024-12-09 05:31:08.291189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.368 [2024-12-09 05:31:08.291199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.368 [2024-12-09 05:31:08.291207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.368 [2024-12-09 05:31:08.291216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.368 [2024-12-09 05:31:08.303087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.368 [2024-12-09 05:31:08.303603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.368 [2024-12-09 05:31:08.303623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.368 [2024-12-09 05:31:08.303631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.368 [2024-12-09 05:31:08.303795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.368 [2024-12-09 05:31:08.303966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.368 [2024-12-09 05:31:08.303977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.368 [2024-12-09 05:31:08.303984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.368 [2024-12-09 05:31:08.303991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.368 [2024-12-09 05:31:08.316007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.368 [2024-12-09 05:31:08.316632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.368 [2024-12-09 05:31:08.316669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.368 [2024-12-09 05:31:08.316681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.368 [2024-12-09 05:31:08.316876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.368 [2024-12-09 05:31:08.317047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.368 [2024-12-09 05:31:08.317057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.368 [2024-12-09 05:31:08.317066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.368 [2024-12-09 05:31:08.317075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.368 [2024-12-09 05:31:08.328936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.368 [2024-12-09 05:31:08.329455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.368 [2024-12-09 05:31:08.329474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.368 [2024-12-09 05:31:08.329482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.368 [2024-12-09 05:31:08.329647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.368 [2024-12-09 05:31:08.329811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.368 [2024-12-09 05:31:08.329823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.368 [2024-12-09 05:31:08.329830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.368 [2024-12-09 05:31:08.329837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.368 [2024-12-09 05:31:08.341845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.368 [2024-12-09 05:31:08.342358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.368 [2024-12-09 05:31:08.342376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.368 [2024-12-09 05:31:08.342385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.368 [2024-12-09 05:31:08.342548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.368 [2024-12-09 05:31:08.342711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.368 [2024-12-09 05:31:08.342720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.368 [2024-12-09 05:31:08.342727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.368 [2024-12-09 05:31:08.342734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.368 [2024-12-09 05:31:08.354742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.368 [2024-12-09 05:31:08.355356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.368 [2024-12-09 05:31:08.355393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.368 [2024-12-09 05:31:08.355405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.368 [2024-12-09 05:31:08.355594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.368 [2024-12-09 05:31:08.355761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.368 [2024-12-09 05:31:08.355771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.368 [2024-12-09 05:31:08.355783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.368 [2024-12-09 05:31:08.355791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.631 [2024-12-09 05:31:08.367664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.631 [2024-12-09 05:31:08.368181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.631 [2024-12-09 05:31:08.368200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.631 [2024-12-09 05:31:08.368209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.631 [2024-12-09 05:31:08.368373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.631 [2024-12-09 05:31:08.368538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.631 [2024-12-09 05:31:08.368546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.631 [2024-12-09 05:31:08.368554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.631 [2024-12-09 05:31:08.368561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.631 [2024-12-09 05:31:08.380581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.631 [2024-12-09 05:31:08.381207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.631 [2024-12-09 05:31:08.381244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.631 [2024-12-09 05:31:08.381255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.631 [2024-12-09 05:31:08.381444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.631 [2024-12-09 05:31:08.381611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.631 [2024-12-09 05:31:08.381630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.631 [2024-12-09 05:31:08.381639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.631 [2024-12-09 05:31:08.381648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.631 [2024-12-09 05:31:08.393374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.631 [2024-12-09 05:31:08.394066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.631 [2024-12-09 05:31:08.394103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.631 [2024-12-09 05:31:08.394115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.631 [2024-12-09 05:31:08.394304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.631 [2024-12-09 05:31:08.394471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.631 [2024-12-09 05:31:08.394482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.631 [2024-12-09 05:31:08.394490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.631 [2024-12-09 05:31:08.394500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.631 [2024-12-09 05:31:08.406214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.631 [2024-12-09 05:31:08.406740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.631 [2024-12-09 05:31:08.406760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.631 [2024-12-09 05:31:08.406769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.631 [2024-12-09 05:31:08.406938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.631 [2024-12-09 05:31:08.407103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.631 [2024-12-09 05:31:08.407112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.631 [2024-12-09 05:31:08.407119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.631 [2024-12-09 05:31:08.407126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.631 [2024-12-09 05:31:08.419121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.631 [2024-12-09 05:31:08.419649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.631 [2024-12-09 05:31:08.419669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.631 [2024-12-09 05:31:08.419677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.631 [2024-12-09 05:31:08.419845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.631 [2024-12-09 05:31:08.420009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.631 [2024-12-09 05:31:08.420018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.631 [2024-12-09 05:31:08.420026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.631 [2024-12-09 05:31:08.420032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.631 7182.00 IOPS, 28.05 MiB/s [2024-12-09T04:31:08.628Z] [2024-12-09 05:31:08.433222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.631 [2024-12-09 05:31:08.433870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.631 [2024-12-09 05:31:08.433907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.631 [2024-12-09 05:31:08.433920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.631 [2024-12-09 05:31:08.434112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.631 [2024-12-09 05:31:08.434280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.631 [2024-12-09 05:31:08.434291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.631 [2024-12-09 05:31:08.434299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.631 [2024-12-09 05:31:08.434307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.631 [2024-12-09 05:31:08.446022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.631 [2024-12-09 05:31:08.446636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.631 [2024-12-09 05:31:08.446677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.631 [2024-12-09 05:31:08.446688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.631 [2024-12-09 05:31:08.446884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.631 [2024-12-09 05:31:08.447053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.631 [2024-12-09 05:31:08.447062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.631 [2024-12-09 05:31:08.447070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.631 [2024-12-09 05:31:08.447078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.631 [2024-12-09 05:31:08.458955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.631 [2024-12-09 05:31:08.459571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.631 [2024-12-09 05:31:08.459609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.631 [2024-12-09 05:31:08.459620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.631 [2024-12-09 05:31:08.459809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.631 [2024-12-09 05:31:08.459984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.631 [2024-12-09 05:31:08.459994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.631 [2024-12-09 05:31:08.460002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.631 [2024-12-09 05:31:08.460011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.631 [2024-12-09 05:31:08.471894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.631 [2024-12-09 05:31:08.472520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.631 [2024-12-09 05:31:08.472557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.631 [2024-12-09 05:31:08.472575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.631 [2024-12-09 05:31:08.472766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.631 [2024-12-09 05:31:08.472940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.632 [2024-12-09 05:31:08.472951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.632 [2024-12-09 05:31:08.472959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.632 [2024-12-09 05:31:08.472968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.632 [2024-12-09 05:31:08.484682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.632 [2024-12-09 05:31:08.485278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.632 [2024-12-09 05:31:08.485315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.632 [2024-12-09 05:31:08.485326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.632 [2024-12-09 05:31:08.485519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.632 [2024-12-09 05:31:08.485687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.632 [2024-12-09 05:31:08.485698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.632 [2024-12-09 05:31:08.485706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.632 [2024-12-09 05:31:08.485715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.632 [2024-12-09 05:31:08.497603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:54.632 [2024-12-09 05:31:08.498164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.632 [2024-12-09 05:31:08.498201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:54.632 [2024-12-09 05:31:08.498213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:54.632 [2024-12-09 05:31:08.498402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:54.632 [2024-12-09 05:31:08.498570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:54.632 [2024-12-09 05:31:08.498581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:54.632 [2024-12-09 05:31:08.498589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:54.632 [2024-12-09 05:31:08.498598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:54.632 [2024-12-09 05:31:08.510458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.632 [2024-12-09 05:31:08.510963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.632 [2024-12-09 05:31:08.510983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.632 [2024-12-09 05:31:08.510992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.632 [2024-12-09 05:31:08.511156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.632 [2024-12-09 05:31:08.511320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.632 [2024-12-09 05:31:08.511329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.632 [2024-12-09 05:31:08.511336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.632 [2024-12-09 05:31:08.511343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.632 [2024-12-09 05:31:08.523340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.632 [2024-12-09 05:31:08.523867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.632 [2024-12-09 05:31:08.523886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.632 [2024-12-09 05:31:08.523894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.632 [2024-12-09 05:31:08.524058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.632 [2024-12-09 05:31:08.524225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.632 [2024-12-09 05:31:08.524234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.632 [2024-12-09 05:31:08.524241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.632 [2024-12-09 05:31:08.524248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.632 [2024-12-09 05:31:08.536255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.632 [2024-12-09 05:31:08.536911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.632 [2024-12-09 05:31:08.536949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.632 [2024-12-09 05:31:08.536961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.632 [2024-12-09 05:31:08.537153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.632 [2024-12-09 05:31:08.537321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.632 [2024-12-09 05:31:08.537331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.632 [2024-12-09 05:31:08.537339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.632 [2024-12-09 05:31:08.537348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.632 [2024-12-09 05:31:08.549058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.632 [2024-12-09 05:31:08.549693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.632 [2024-12-09 05:31:08.549731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.632 [2024-12-09 05:31:08.549742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.632 [2024-12-09 05:31:08.549938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.632 [2024-12-09 05:31:08.550107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.632 [2024-12-09 05:31:08.550117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.632 [2024-12-09 05:31:08.550125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.632 [2024-12-09 05:31:08.550133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.632 [2024-12-09 05:31:08.562040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.632 [2024-12-09 05:31:08.562544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.632 [2024-12-09 05:31:08.562564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.632 [2024-12-09 05:31:08.562573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.632 [2024-12-09 05:31:08.562737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.632 [2024-12-09 05:31:08.562906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.632 [2024-12-09 05:31:08.562916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.632 [2024-12-09 05:31:08.562928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.632 [2024-12-09 05:31:08.562935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.632 [2024-12-09 05:31:08.574953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.632 [2024-12-09 05:31:08.575581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.632 [2024-12-09 05:31:08.575619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.632 [2024-12-09 05:31:08.575631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.632 [2024-12-09 05:31:08.575831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.632 [2024-12-09 05:31:08.575999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.632 [2024-12-09 05:31:08.576009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.632 [2024-12-09 05:31:08.576018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.632 [2024-12-09 05:31:08.576027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.632 [2024-12-09 05:31:08.587888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.632 [2024-12-09 05:31:08.588520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.632 [2024-12-09 05:31:08.588558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.632 [2024-12-09 05:31:08.588571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.632 [2024-12-09 05:31:08.588774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.632 [2024-12-09 05:31:08.588950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.632 [2024-12-09 05:31:08.588961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.632 [2024-12-09 05:31:08.588968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.632 [2024-12-09 05:31:08.588977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.632 [2024-12-09 05:31:08.600690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.632 [2024-12-09 05:31:08.601316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.632 [2024-12-09 05:31:08.601354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.632 [2024-12-09 05:31:08.601365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.632 [2024-12-09 05:31:08.601554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.633 [2024-12-09 05:31:08.601722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.633 [2024-12-09 05:31:08.601733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.633 [2024-12-09 05:31:08.601741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.633 [2024-12-09 05:31:08.601750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.633 [2024-12-09 05:31:08.613713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.633 [2024-12-09 05:31:08.614099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.633 [2024-12-09 05:31:08.614119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.633 [2024-12-09 05:31:08.614128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.633 [2024-12-09 05:31:08.614292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.633 [2024-12-09 05:31:08.614456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.633 [2024-12-09 05:31:08.614465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.633 [2024-12-09 05:31:08.614472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.633 [2024-12-09 05:31:08.614479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.895 [2024-12-09 05:31:08.626630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.895 [2024-12-09 05:31:08.626993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.895 [2024-12-09 05:31:08.627013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.895 [2024-12-09 05:31:08.627021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.895 [2024-12-09 05:31:08.627186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.895 [2024-12-09 05:31:08.627350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.895 [2024-12-09 05:31:08.627359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.895 [2024-12-09 05:31:08.627366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.895 [2024-12-09 05:31:08.627373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.895 [2024-12-09 05:31:08.639525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.895 [2024-12-09 05:31:08.640214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.895 [2024-12-09 05:31:08.640252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.895 [2024-12-09 05:31:08.640263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.895 [2024-12-09 05:31:08.640452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.895 [2024-12-09 05:31:08.640620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.895 [2024-12-09 05:31:08.640631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.895 [2024-12-09 05:31:08.640639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.895 [2024-12-09 05:31:08.640648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.895 [2024-12-09 05:31:08.652359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.895 [2024-12-09 05:31:08.653051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.895 [2024-12-09 05:31:08.653092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.895 [2024-12-09 05:31:08.653104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.895 [2024-12-09 05:31:08.653292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.895 [2024-12-09 05:31:08.653466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.895 [2024-12-09 05:31:08.653477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.895 [2024-12-09 05:31:08.653485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.895 [2024-12-09 05:31:08.653494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.895 [2024-12-09 05:31:08.665214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.895 [2024-12-09 05:31:08.665872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.895 [2024-12-09 05:31:08.665910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.895 [2024-12-09 05:31:08.665922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.895 [2024-12-09 05:31:08.666112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.895 [2024-12-09 05:31:08.666280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.895 [2024-12-09 05:31:08.666290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.895 [2024-12-09 05:31:08.666298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.895 [2024-12-09 05:31:08.666307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.895 [2024-12-09 05:31:08.678024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.895 [2024-12-09 05:31:08.678537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.895 [2024-12-09 05:31:08.678574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.895 [2024-12-09 05:31:08.678586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.895 [2024-12-09 05:31:08.678776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.895 [2024-12-09 05:31:08.678951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.895 [2024-12-09 05:31:08.678962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.895 [2024-12-09 05:31:08.678970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.895 [2024-12-09 05:31:08.678978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.895 [2024-12-09 05:31:08.690850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.895 [2024-12-09 05:31:08.691430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.895 [2024-12-09 05:31:08.691467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.895 [2024-12-09 05:31:08.691478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.895 [2024-12-09 05:31:08.691671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.895 [2024-12-09 05:31:08.691846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.895 [2024-12-09 05:31:08.691856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.896 [2024-12-09 05:31:08.691866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.896 [2024-12-09 05:31:08.691875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.896 [2024-12-09 05:31:08.703733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.896 [2024-12-09 05:31:08.704208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.896 [2024-12-09 05:31:08.704245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.896 [2024-12-09 05:31:08.704257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.896 [2024-12-09 05:31:08.704445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.896 [2024-12-09 05:31:08.704613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.896 [2024-12-09 05:31:08.704623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.896 [2024-12-09 05:31:08.704631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.896 [2024-12-09 05:31:08.704639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.896 [2024-12-09 05:31:08.716653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.896 [2024-12-09 05:31:08.717257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.896 [2024-12-09 05:31:08.717294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.896 [2024-12-09 05:31:08.717306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.896 [2024-12-09 05:31:08.717495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.896 [2024-12-09 05:31:08.717663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.896 [2024-12-09 05:31:08.717673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.896 [2024-12-09 05:31:08.717681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.896 [2024-12-09 05:31:08.717690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.896 [2024-12-09 05:31:08.729553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.896 [2024-12-09 05:31:08.730162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.896 [2024-12-09 05:31:08.730199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.896 [2024-12-09 05:31:08.730210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.896 [2024-12-09 05:31:08.730399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.896 [2024-12-09 05:31:08.730567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.896 [2024-12-09 05:31:08.730582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.896 [2024-12-09 05:31:08.730590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.896 [2024-12-09 05:31:08.730600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
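Each refused connect is followed immediately by "Failed to flush tqpair=... (9): Bad file descriptor": the socket behind the qpair was never established (or has already been torn down), so the flush in nvme_tcp_qpair_process_completions operates on a stale descriptor and fails with errno 9 (EBADF). A tiny SPDK-independent illustration of that errno:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(1);   /* any valid descriptor */
    close(fd);         /* now invalid, like the never-connected qpair socket */

    /* Operating on the stale descriptor reports errno 9 (EBADF). */
    if (write(fd, "x", 1) < 0)
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    return 0;
}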
00:37:54.896 [2024-12-09 05:31:08.742457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.896 [2024-12-09 05:31:08.743072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.896 [2024-12-09 05:31:08.743109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.896 [2024-12-09 05:31:08.743121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.896 [2024-12-09 05:31:08.743310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.896 [2024-12-09 05:31:08.743477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.896 [2024-12-09 05:31:08.743487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.896 [2024-12-09 05:31:08.743495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.896 [2024-12-09 05:31:08.743504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.896 [2024-12-09 05:31:08.755363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.896 [2024-12-09 05:31:08.755901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.896 [2024-12-09 05:31:08.755938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.896 [2024-12-09 05:31:08.755951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.896 [2024-12-09 05:31:08.756143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.896 [2024-12-09 05:31:08.756311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.896 [2024-12-09 05:31:08.756321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.896 [2024-12-09 05:31:08.756329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.896 [2024-12-09 05:31:08.756338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.896 [2024-12-09 05:31:08.768206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.896 [2024-12-09 05:31:08.768865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.896 [2024-12-09 05:31:08.768902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.896 [2024-12-09 05:31:08.768915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.896 [2024-12-09 05:31:08.769108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.896 [2024-12-09 05:31:08.769275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.896 [2024-12-09 05:31:08.769286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.896 [2024-12-09 05:31:08.769294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.896 [2024-12-09 05:31:08.769306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.896 [2024-12-09 05:31:08.781014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.896 [2024-12-09 05:31:08.781629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.896 [2024-12-09 05:31:08.781667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.896 [2024-12-09 05:31:08.781678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.896 [2024-12-09 05:31:08.781874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.896 [2024-12-09 05:31:08.782043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.896 [2024-12-09 05:31:08.782053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.896 [2024-12-09 05:31:08.782061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.896 [2024-12-09 05:31:08.782069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.896 [2024-12-09 05:31:08.793940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.896 [2024-12-09 05:31:08.794516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.896 [2024-12-09 05:31:08.794554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.896 [2024-12-09 05:31:08.794565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.896 [2024-12-09 05:31:08.794754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.897 [2024-12-09 05:31:08.794929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.897 [2024-12-09 05:31:08.794940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.897 [2024-12-09 05:31:08.794948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.897 [2024-12-09 05:31:08.794956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.897 [2024-12-09 05:31:08.806810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.897 [2024-12-09 05:31:08.807427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.897 [2024-12-09 05:31:08.807464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.897 [2024-12-09 05:31:08.807476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.897 [2024-12-09 05:31:08.807665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.897 [2024-12-09 05:31:08.807840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.897 [2024-12-09 05:31:08.807850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.897 [2024-12-09 05:31:08.807859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.897 [2024-12-09 05:31:08.807868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.897 [2024-12-09 05:31:08.819712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.897 [2024-12-09 05:31:08.820358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.897 [2024-12-09 05:31:08.820396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.897 [2024-12-09 05:31:08.820407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.897 [2024-12-09 05:31:08.820596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.897 [2024-12-09 05:31:08.820764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.897 [2024-12-09 05:31:08.820775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.897 [2024-12-09 05:31:08.820783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.897 [2024-12-09 05:31:08.820792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.897 [2024-12-09 05:31:08.832645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.897 [2024-12-09 05:31:08.833103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.897 [2024-12-09 05:31:08.833141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.897 [2024-12-09 05:31:08.833152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.897 [2024-12-09 05:31:08.833341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.897 [2024-12-09 05:31:08.833509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.897 [2024-12-09 05:31:08.833520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.897 [2024-12-09 05:31:08.833535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.897 [2024-12-09 05:31:08.833544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.897 [2024-12-09 05:31:08.845547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.897 [2024-12-09 05:31:08.845959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.897 [2024-12-09 05:31:08.845997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.897 [2024-12-09 05:31:08.846009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.897 [2024-12-09 05:31:08.846200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.897 [2024-12-09 05:31:08.846368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.897 [2024-12-09 05:31:08.846377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.897 [2024-12-09 05:31:08.846385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.897 [2024-12-09 05:31:08.846394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.897 [2024-12-09 05:31:08.858401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.897 [2024-12-09 05:31:08.858947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.897 [2024-12-09 05:31:08.858984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.897 [2024-12-09 05:31:08.859000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.897 [2024-12-09 05:31:08.859192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.897 [2024-12-09 05:31:08.859360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.897 [2024-12-09 05:31:08.859371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.897 [2024-12-09 05:31:08.859379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.897 [2024-12-09 05:31:08.859388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.897 [2024-12-09 05:31:08.871253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.897 [2024-12-09 05:31:08.871926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.897 [2024-12-09 05:31:08.871964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.897 [2024-12-09 05:31:08.871975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.897 [2024-12-09 05:31:08.872164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.897 [2024-12-09 05:31:08.872331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.897 [2024-12-09 05:31:08.872341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.897 [2024-12-09 05:31:08.872350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.897 [2024-12-09 05:31:08.872358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.897 [2024-12-09 05:31:08.884084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.897 [2024-12-09 05:31:08.884612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.897 [2024-12-09 05:31:08.884631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:54.897 [2024-12-09 05:31:08.884640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:54.897 [2024-12-09 05:31:08.884804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:54.897 [2024-12-09 05:31:08.884973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.897 [2024-12-09 05:31:08.884983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.897 [2024-12-09 05:31:08.884990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.897 [2024-12-09 05:31:08.884997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.182 [2024-12-09 05:31:08.897006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.182 [2024-12-09 05:31:08.897475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.182 [2024-12-09 05:31:08.897495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.182 [2024-12-09 05:31:08.897503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.182 [2024-12-09 05:31:08.897668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.182 [2024-12-09 05:31:08.897858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.182 [2024-12-09 05:31:08.897873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.182 [2024-12-09 05:31:08.897884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.182 [2024-12-09 05:31:08.897895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.182 [2024-12-09 05:31:08.909881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.182 [2024-12-09 05:31:08.910495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.182 [2024-12-09 05:31:08.910533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.182 [2024-12-09 05:31:08.910544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.182 [2024-12-09 05:31:08.910733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.182 [2024-12-09 05:31:08.910908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.182 [2024-12-09 05:31:08.910919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.182 [2024-12-09 05:31:08.910926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.182 [2024-12-09 05:31:08.910935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.182 [2024-12-09 05:31:08.922799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.182 [2024-12-09 05:31:08.923405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.182 [2024-12-09 05:31:08.923442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.182 [2024-12-09 05:31:08.923453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.182 [2024-12-09 05:31:08.923642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.182 [2024-12-09 05:31:08.923810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.182 [2024-12-09 05:31:08.923828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.182 [2024-12-09 05:31:08.923836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:08.923845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.183 [2024-12-09 05:31:08.935704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:08.936345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:08.936383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:08.936395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:08.936585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.183 [2024-12-09 05:31:08.936753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.183 [2024-12-09 05:31:08.936767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.183 [2024-12-09 05:31:08.936776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:08.936785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.183 [2024-12-09 05:31:08.948577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:08.949208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:08.949246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:08.949257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:08.949446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.183 [2024-12-09 05:31:08.949614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.183 [2024-12-09 05:31:08.949623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.183 [2024-12-09 05:31:08.949631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:08.949640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.183 [2024-12-09 05:31:08.961494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:08.962129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:08.962167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:08.962178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:08.962367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.183 [2024-12-09 05:31:08.962534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.183 [2024-12-09 05:31:08.962545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.183 [2024-12-09 05:31:08.962553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:08.962561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.183 [2024-12-09 05:31:08.974282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:08.974788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:08.974808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:08.974823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:08.974987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.183 [2024-12-09 05:31:08.975152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.183 [2024-12-09 05:31:08.975160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.183 [2024-12-09 05:31:08.975167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:08.975177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.183 [2024-12-09 05:31:08.987169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:08.987784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:08.987826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:08.987837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:08.988026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.183 [2024-12-09 05:31:08.988194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.183 [2024-12-09 05:31:08.988203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.183 [2024-12-09 05:31:08.988211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:08.988220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.183 [2024-12-09 05:31:09.000093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:09.000714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:09.000751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:09.000763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:09.000958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.183 [2024-12-09 05:31:09.001126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.183 [2024-12-09 05:31:09.001136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.183 [2024-12-09 05:31:09.001145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:09.001154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.183 [2024-12-09 05:31:09.013025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:09.013650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:09.013687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:09.013698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:09.013895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.183 [2024-12-09 05:31:09.014063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.183 [2024-12-09 05:31:09.014073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.183 [2024-12-09 05:31:09.014081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:09.014089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.183 [2024-12-09 05:31:09.025947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:09.026576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:09.026613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:09.026624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:09.026813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.183 [2024-12-09 05:31:09.026989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.183 [2024-12-09 05:31:09.026999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.183 [2024-12-09 05:31:09.027007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:09.027016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.183 [2024-12-09 05:31:09.038873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:09.039479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:09.039517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:09.039528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:09.039716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.183 [2024-12-09 05:31:09.039891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.183 [2024-12-09 05:31:09.039902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.183 [2024-12-09 05:31:09.039910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.183 [2024-12-09 05:31:09.039918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.183 [2024-12-09 05:31:09.051773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.183 [2024-12-09 05:31:09.052398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.183 [2024-12-09 05:31:09.052435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.183 [2024-12-09 05:31:09.052446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.183 [2024-12-09 05:31:09.052635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.052803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.052813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.052830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.052838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.184 [2024-12-09 05:31:09.064693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.184 [2024-12-09 05:31:09.065332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.184 [2024-12-09 05:31:09.065370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.184 [2024-12-09 05:31:09.065385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.184 [2024-12-09 05:31:09.065575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.065743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.065753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.065761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.065770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.184 [2024-12-09 05:31:09.077490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.184 [2024-12-09 05:31:09.077990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.184 [2024-12-09 05:31:09.078010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.184 [2024-12-09 05:31:09.078018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.184 [2024-12-09 05:31:09.078183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.078347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.078356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.078363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.078370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.184 [2024-12-09 05:31:09.090403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.184 [2024-12-09 05:31:09.090891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.184 [2024-12-09 05:31:09.090909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.184 [2024-12-09 05:31:09.090918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.184 [2024-12-09 05:31:09.091082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.091245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.091254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.091261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.091268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.184 [2024-12-09 05:31:09.103283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.184 [2024-12-09 05:31:09.103833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.184 [2024-12-09 05:31:09.103872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.184 [2024-12-09 05:31:09.103884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.184 [2024-12-09 05:31:09.104076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.104247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.104257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.104266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.104274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.184 [2024-12-09 05:31:09.116143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.184 [2024-12-09 05:31:09.116661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.184 [2024-12-09 05:31:09.116698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.184 [2024-12-09 05:31:09.116710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.184 [2024-12-09 05:31:09.116908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.117077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.117086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.117095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.117104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.184 [2024-12-09 05:31:09.128959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.184 [2024-12-09 05:31:09.129580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.184 [2024-12-09 05:31:09.129617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.184 [2024-12-09 05:31:09.129629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.184 [2024-12-09 05:31:09.129825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.129993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.130002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.130011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.130020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.184 [2024-12-09 05:31:09.141878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.184 [2024-12-09 05:31:09.142405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.184 [2024-12-09 05:31:09.142424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.184 [2024-12-09 05:31:09.142433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.184 [2024-12-09 05:31:09.142596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.142760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.142769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.142779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.142786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.184 [2024-12-09 05:31:09.154794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.184 [2024-12-09 05:31:09.155406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.184 [2024-12-09 05:31:09.155443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.184 [2024-12-09 05:31:09.155454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.184 [2024-12-09 05:31:09.155643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.155811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.155826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.155834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.155843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.184 [2024-12-09 05:31:09.167710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.184 [2024-12-09 05:31:09.168319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.184 [2024-12-09 05:31:09.168357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.184 [2024-12-09 05:31:09.168368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.184 [2024-12-09 05:31:09.168557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.184 [2024-12-09 05:31:09.168724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.184 [2024-12-09 05:31:09.168734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.184 [2024-12-09 05:31:09.168742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.184 [2024-12-09 05:31:09.168752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.446 [2024-12-09 05:31:09.180622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.446 [2024-12-09 05:31:09.181122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.446 [2024-12-09 05:31:09.181158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.446 [2024-12-09 05:31:09.181169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.446 [2024-12-09 05:31:09.181358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.446 [2024-12-09 05:31:09.181526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.446 [2024-12-09 05:31:09.181535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.446 [2024-12-09 05:31:09.181544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.446 [2024-12-09 05:31:09.181552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.446 [2024-12-09 05:31:09.193468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.446 [2024-12-09 05:31:09.193957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.446 [2024-12-09 05:31:09.193977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.446 [2024-12-09 05:31:09.193986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.446 [2024-12-09 05:31:09.194150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.446 [2024-12-09 05:31:09.194314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.446 [2024-12-09 05:31:09.194324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.446 [2024-12-09 05:31:09.194330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.446 [2024-12-09 05:31:09.194337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.446 [2024-12-09 05:31:09.206346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.446 [2024-12-09 05:31:09.206934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.446 [2024-12-09 05:31:09.206977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.446 [2024-12-09 05:31:09.206989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.446 [2024-12-09 05:31:09.207179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.446 [2024-12-09 05:31:09.207346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.446 [2024-12-09 05:31:09.207357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.446 [2024-12-09 05:31:09.207365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.446 [2024-12-09 05:31:09.207374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.446 [2024-12-09 05:31:09.219402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.446 [2024-12-09 05:31:09.220085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.446 [2024-12-09 05:31:09.220123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.446 [2024-12-09 05:31:09.220134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.446 [2024-12-09 05:31:09.220323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.446 [2024-12-09 05:31:09.220491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.446 [2024-12-09 05:31:09.220500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.446 [2024-12-09 05:31:09.220509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.446 [2024-12-09 05:31:09.220517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.446 [2024-12-09 05:31:09.232219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.446 [2024-12-09 05:31:09.232830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.446 [2024-12-09 05:31:09.232870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.446 [2024-12-09 05:31:09.232882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.446 [2024-12-09 05:31:09.233074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.446 [2024-12-09 05:31:09.233242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.446 [2024-12-09 05:31:09.233251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.446 [2024-12-09 05:31:09.233259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.446 [2024-12-09 05:31:09.233268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.446 [2024-12-09 05:31:09.245138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.446 [2024-12-09 05:31:09.245734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.446 [2024-12-09 05:31:09.245771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.446 [2024-12-09 05:31:09.245783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.446 [2024-12-09 05:31:09.245979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.446 [2024-12-09 05:31:09.246148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.446 [2024-12-09 05:31:09.246159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.446 [2024-12-09 05:31:09.246167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.246176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.447 [2024-12-09 05:31:09.258042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.258671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.258708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.258719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.258914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.259082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.447 [2024-12-09 05:31:09.259092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.447 [2024-12-09 05:31:09.259100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.259109] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.447 [2024-12-09 05:31:09.270968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.271590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.271628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.271639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.271839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.272008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.447 [2024-12-09 05:31:09.272017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.447 [2024-12-09 05:31:09.272025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.272034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.447 [2024-12-09 05:31:09.283901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.284531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.284568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.284579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.284768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.284943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.447 [2024-12-09 05:31:09.284954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.447 [2024-12-09 05:31:09.284962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.284971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.447 [2024-12-09 05:31:09.296686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.297307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.297344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.297355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.297543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.297711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.447 [2024-12-09 05:31:09.297720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.447 [2024-12-09 05:31:09.297729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.297738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.447 [2024-12-09 05:31:09.309613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.310226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.310263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.310275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.310464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.310631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.447 [2024-12-09 05:31:09.310644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.447 [2024-12-09 05:31:09.310652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.310661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.447 [2024-12-09 05:31:09.322517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.323027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.323048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.323056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.323221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.323384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.447 [2024-12-09 05:31:09.323394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.447 [2024-12-09 05:31:09.323401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.323408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.447 [2024-12-09 05:31:09.335411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.335881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.335900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.335908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.336071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.336235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.447 [2024-12-09 05:31:09.336244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.447 [2024-12-09 05:31:09.336251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.336258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.447 [2024-12-09 05:31:09.348260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.348874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.348911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.348924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.349114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.349281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.447 [2024-12-09 05:31:09.349291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.447 [2024-12-09 05:31:09.349304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.349313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.447 [2024-12-09 05:31:09.361176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.361686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.361706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.361715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.361885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.362049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.447 [2024-12-09 05:31:09.362058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.447 [2024-12-09 05:31:09.362065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.447 [2024-12-09 05:31:09.362073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.447 [2024-12-09 05:31:09.374074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.447 [2024-12-09 05:31:09.374665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.447 [2024-12-09 05:31:09.374702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.447 [2024-12-09 05:31:09.374714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.447 [2024-12-09 05:31:09.374910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.447 [2024-12-09 05:31:09.375078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.448 [2024-12-09 05:31:09.375088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.448 [2024-12-09 05:31:09.375096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.448 [2024-12-09 05:31:09.375104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.448 [2024-12-09 05:31:09.386974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.448 [2024-12-09 05:31:09.387613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.448 [2024-12-09 05:31:09.387651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.448 [2024-12-09 05:31:09.387662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.448 [2024-12-09 05:31:09.387864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.448 [2024-12-09 05:31:09.388032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.448 [2024-12-09 05:31:09.388042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.448 [2024-12-09 05:31:09.388050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.448 [2024-12-09 05:31:09.388058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.448 [2024-12-09 05:31:09.399793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.448 [2024-12-09 05:31:09.400424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.448 [2024-12-09 05:31:09.400461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.448 [2024-12-09 05:31:09.400473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.448 [2024-12-09 05:31:09.400662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.448 [2024-12-09 05:31:09.400835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.448 [2024-12-09 05:31:09.400845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.448 [2024-12-09 05:31:09.400853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.448 [2024-12-09 05:31:09.400861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.448 [2024-12-09 05:31:09.412724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.448 [2024-12-09 05:31:09.413307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.448 [2024-12-09 05:31:09.413345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.448 [2024-12-09 05:31:09.413356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.448 [2024-12-09 05:31:09.413545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.448 [2024-12-09 05:31:09.413712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.448 [2024-12-09 05:31:09.413723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.448 [2024-12-09 05:31:09.413731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.448 [2024-12-09 05:31:09.413740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.448 [2024-12-09 05:31:09.425617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.448 [2024-12-09 05:31:09.426113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.448 [2024-12-09 05:31:09.426133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.448 [2024-12-09 05:31:09.426141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.448 [2024-12-09 05:31:09.426306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.448 [2024-12-09 05:31:09.426470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.448 [2024-12-09 05:31:09.426479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.448 [2024-12-09 05:31:09.426487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.448 [2024-12-09 05:31:09.426495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.448 5386.50 IOPS, 21.04 MiB/s [2024-12-09T04:31:09.445Z] [2024-12-09 05:31:09.438476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.710 [2024-12-09 05:31:09.438990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.710 [2024-12-09 05:31:09.439009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.710 [2024-12-09 05:31:09.439021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.710 [2024-12-09 05:31:09.439185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.710 [2024-12-09 05:31:09.439348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.710 [2024-12-09 05:31:09.439357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.710 [2024-12-09 05:31:09.439364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.710 [2024-12-09 05:31:09.439371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.710 [2024-12-09 05:31:09.451384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.710 [2024-12-09 05:31:09.451998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.710 [2024-12-09 05:31:09.452035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.710 [2024-12-09 05:31:09.452046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.710 [2024-12-09 05:31:09.452235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.710 [2024-12-09 05:31:09.452403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.710 [2024-12-09 05:31:09.452413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.710 [2024-12-09 05:31:09.452421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.710 [2024-12-09 05:31:09.452429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.710 [2024-12-09 05:31:09.464289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.710 [2024-12-09 05:31:09.464813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.710 [2024-12-09 05:31:09.464837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.710 [2024-12-09 05:31:09.464846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.710 [2024-12-09 05:31:09.465009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.710 [2024-12-09 05:31:09.465173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.710 [2024-12-09 05:31:09.465182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.710 [2024-12-09 05:31:09.465189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.710 [2024-12-09 05:31:09.465197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.710 [2024-12-09 05:31:09.477190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.710 [2024-12-09 05:31:09.477796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.710 [2024-12-09 05:31:09.477839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.710 [2024-12-09 05:31:09.477851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.710 [2024-12-09 05:31:09.478044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.710 [2024-12-09 05:31:09.478212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.710 [2024-12-09 05:31:09.478222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.710 [2024-12-09 05:31:09.478230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.710 [2024-12-09 05:31:09.478239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.710 [2024-12-09 05:31:09.490102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.710 [2024-12-09 05:31:09.490635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.710 [2024-12-09 05:31:09.490654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.710 [2024-12-09 05:31:09.490663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.710 [2024-12-09 05:31:09.490833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.710 [2024-12-09 05:31:09.490998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.710 [2024-12-09 05:31:09.491007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.710 [2024-12-09 05:31:09.491014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.710 [2024-12-09 05:31:09.491021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.710 [2024-12-09 05:31:09.503035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.710 [2024-12-09 05:31:09.503603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.710 [2024-12-09 05:31:09.503640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.710 [2024-12-09 05:31:09.503651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.710 [2024-12-09 05:31:09.503848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.710 [2024-12-09 05:31:09.504017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.710 [2024-12-09 05:31:09.504026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.710 [2024-12-09 05:31:09.504034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.710 [2024-12-09 05:31:09.504042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.710 [2024-12-09 05:31:09.515897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.710 [2024-12-09 05:31:09.516497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.710 [2024-12-09 05:31:09.516534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.710 [2024-12-09 05:31:09.516545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.710 [2024-12-09 05:31:09.516734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.710 [2024-12-09 05:31:09.516911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.710 [2024-12-09 05:31:09.516926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.710 [2024-12-09 05:31:09.516933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.710 [2024-12-09 05:31:09.516942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.710 [2024-12-09 05:31:09.528806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.710 [2024-12-09 05:31:09.529312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.710 [2024-12-09 05:31:09.529332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.710 [2024-12-09 05:31:09.529340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.710 [2024-12-09 05:31:09.529504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.710 [2024-12-09 05:31:09.529668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.710 [2024-12-09 05:31:09.529677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.710 [2024-12-09 05:31:09.529684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.710 [2024-12-09 05:31:09.529691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.710 [2024-12-09 05:31:09.541690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.710 [2024-12-09 05:31:09.542214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.710 [2024-12-09 05:31:09.542232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.710 [2024-12-09 05:31:09.542240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.711 [2024-12-09 05:31:09.542403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.711 [2024-12-09 05:31:09.542566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.711 [2024-12-09 05:31:09.542575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.711 [2024-12-09 05:31:09.542582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.711 [2024-12-09 05:31:09.542590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.711 [2024-12-09 05:31:09.554582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.711 [2024-12-09 05:31:09.555210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.711 [2024-12-09 05:31:09.555247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.711 [2024-12-09 05:31:09.555259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.711 [2024-12-09 05:31:09.555448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.711 [2024-12-09 05:31:09.555616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.711 [2024-12-09 05:31:09.555625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.711 [2024-12-09 05:31:09.555633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.711 [2024-12-09 05:31:09.555647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.711 [2024-12-09 05:31:09.567511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.711 [2024-12-09 05:31:09.568126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.711 [2024-12-09 05:31:09.568163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.711 [2024-12-09 05:31:09.568174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.711 [2024-12-09 05:31:09.568364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.711 [2024-12-09 05:31:09.568532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.711 [2024-12-09 05:31:09.568549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.711 [2024-12-09 05:31:09.568557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.711 [2024-12-09 05:31:09.568566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.711 [2024-12-09 05:31:09.580433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.711 [2024-12-09 05:31:09.580925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.711 [2024-12-09 05:31:09.580961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:55.711 [2024-12-09 05:31:09.580973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:55.711 [2024-12-09 05:31:09.581165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:55.711 [2024-12-09 05:31:09.581333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.711 [2024-12-09 05:31:09.581343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.711 [2024-12-09 05:31:09.581350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.711 [2024-12-09 05:31:09.581358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.711 [2024-12-09 05:31:09.593244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.711 [2024-12-09 05:31:09.593873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.711 [2024-12-09 05:31:09.593912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.711 [2024-12-09 05:31:09.593924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.711 [2024-12-09 05:31:09.594114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.711 [2024-12-09 05:31:09.594282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.711 [2024-12-09 05:31:09.594293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.711 [2024-12-09 05:31:09.594300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.711 [2024-12-09 05:31:09.594309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.711 [2024-12-09 05:31:09.606162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.711 [2024-12-09 05:31:09.606689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.711 [2024-12-09 05:31:09.606708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.711 [2024-12-09 05:31:09.606716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.711 [2024-12-09 05:31:09.606889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.711 [2024-12-09 05:31:09.607053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.711 [2024-12-09 05:31:09.607063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.711 [2024-12-09 05:31:09.607071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.711 [2024-12-09 05:31:09.607078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.711 [2024-12-09 05:31:09.619076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.711 [2024-12-09 05:31:09.619687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.711 [2024-12-09 05:31:09.619725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.711 [2024-12-09 05:31:09.619736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.711 [2024-12-09 05:31:09.619934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.711 [2024-12-09 05:31:09.620102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.711 [2024-12-09 05:31:09.620112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.711 [2024-12-09 05:31:09.620120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.711 [2024-12-09 05:31:09.620128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.711 [2024-12-09 05:31:09.631986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.711 [2024-12-09 05:31:09.632622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.711 [2024-12-09 05:31:09.632659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.711 [2024-12-09 05:31:09.632670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.711 [2024-12-09 05:31:09.632868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.711 [2024-12-09 05:31:09.633036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.711 [2024-12-09 05:31:09.633045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.711 [2024-12-09 05:31:09.633053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.711 [2024-12-09 05:31:09.633062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.711 [2024-12-09 05:31:09.645004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.711 [2024-12-09 05:31:09.645589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.711 [2024-12-09 05:31:09.645626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.711 [2024-12-09 05:31:09.645642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.711 [2024-12-09 05:31:09.645840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.711 [2024-12-09 05:31:09.646009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.711 [2024-12-09 05:31:09.646018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.711 [2024-12-09 05:31:09.646027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.711 [2024-12-09 05:31:09.646035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.711 [2024-12-09 05:31:09.657903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.711 [2024-12-09 05:31:09.658533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.711 [2024-12-09 05:31:09.658569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.711 [2024-12-09 05:31:09.658580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.711 [2024-12-09 05:31:09.658769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.711 [2024-12-09 05:31:09.658945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.711 [2024-12-09 05:31:09.658955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.711 [2024-12-09 05:31:09.658964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.711 [2024-12-09 05:31:09.658972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.711 [2024-12-09 05:31:09.670683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.711 [2024-12-09 05:31:09.671397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.711 [2024-12-09 05:31:09.671434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.712 [2024-12-09 05:31:09.671447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.712 [2024-12-09 05:31:09.671637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.712 [2024-12-09 05:31:09.671805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.712 [2024-12-09 05:31:09.671821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.712 [2024-12-09 05:31:09.671829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.712 [2024-12-09 05:31:09.671838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.712 [2024-12-09 05:31:09.683545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.712 [2024-12-09 05:31:09.684156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.712 [2024-12-09 05:31:09.684194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.712 [2024-12-09 05:31:09.684205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.712 [2024-12-09 05:31:09.684394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.712 [2024-12-09 05:31:09.684568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.712 [2024-12-09 05:31:09.684579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.712 [2024-12-09 05:31:09.684587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.712 [2024-12-09 05:31:09.684595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.712 [2024-12-09 05:31:09.696463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.712 [2024-12-09 05:31:09.697137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.712 [2024-12-09 05:31:09.697175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.712 [2024-12-09 05:31:09.697186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.712 [2024-12-09 05:31:09.697375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.712 [2024-12-09 05:31:09.697542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.712 [2024-12-09 05:31:09.697552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.712 [2024-12-09 05:31:09.697559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.712 [2024-12-09 05:31:09.697567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.973 [2024-12-09 05:31:09.709280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.973 [2024-12-09 05:31:09.709879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.973 [2024-12-09 05:31:09.709917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.973 [2024-12-09 05:31:09.709929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.973 [2024-12-09 05:31:09.710122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.974 [2024-12-09 05:31:09.710290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.974 [2024-12-09 05:31:09.710300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.974 [2024-12-09 05:31:09.710308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.974 [2024-12-09 05:31:09.710316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.974 [2024-12-09 05:31:09.722197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.974 [2024-12-09 05:31:09.722566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.974 [2024-12-09 05:31:09.722586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.974 [2024-12-09 05:31:09.722595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.974 [2024-12-09 05:31:09.722760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.974 [2024-12-09 05:31:09.722931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.974 [2024-12-09 05:31:09.722940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.974 [2024-12-09 05:31:09.722952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.974 [2024-12-09 05:31:09.722959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.974 [2024-12-09 05:31:09.735107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.974 [2024-12-09 05:31:09.735619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.974 [2024-12-09 05:31:09.735637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.974 [2024-12-09 05:31:09.735645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.974 [2024-12-09 05:31:09.735808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.974 [2024-12-09 05:31:09.735978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.974 [2024-12-09 05:31:09.735988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.974 [2024-12-09 05:31:09.735995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.974 [2024-12-09 05:31:09.736001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.974 [2024-12-09 05:31:09.748000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.974 [2024-12-09 05:31:09.748552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.974 [2024-12-09 05:31:09.748589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.974 [2024-12-09 05:31:09.748601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.974 [2024-12-09 05:31:09.748790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.974 [2024-12-09 05:31:09.748965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.974 [2024-12-09 05:31:09.748975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.974 [2024-12-09 05:31:09.748983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.974 [2024-12-09 05:31:09.748999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.974 [2024-12-09 05:31:09.760855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.974 [2024-12-09 05:31:09.761487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.974 [2024-12-09 05:31:09.761524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.974 [2024-12-09 05:31:09.761535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.974 [2024-12-09 05:31:09.761724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.974 [2024-12-09 05:31:09.761902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.974 [2024-12-09 05:31:09.761912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.974 [2024-12-09 05:31:09.761920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.974 [2024-12-09 05:31:09.761928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.974 [2024-12-09 05:31:09.773657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.974 [2024-12-09 05:31:09.774290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.974 [2024-12-09 05:31:09.774328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.974 [2024-12-09 05:31:09.774340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.974 [2024-12-09 05:31:09.774531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.974 [2024-12-09 05:31:09.774699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.974 [2024-12-09 05:31:09.774709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.974 [2024-12-09 05:31:09.774717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.974 [2024-12-09 05:31:09.774726] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.974 [2024-12-09 05:31:09.786453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.974 [2024-12-09 05:31:09.786956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.974 [2024-12-09 05:31:09.786976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.974 [2024-12-09 05:31:09.786984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.974 [2024-12-09 05:31:09.787148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.974 [2024-12-09 05:31:09.787312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.974 [2024-12-09 05:31:09.787321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.974 [2024-12-09 05:31:09.787328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.974 [2024-12-09 05:31:09.787335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.974 [2024-12-09 05:31:09.799392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.974 [2024-12-09 05:31:09.800032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.974 [2024-12-09 05:31:09.800069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.974 [2024-12-09 05:31:09.800085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.974 [2024-12-09 05:31:09.800274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.974 [2024-12-09 05:31:09.800442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.974 [2024-12-09 05:31:09.800452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.974 [2024-12-09 05:31:09.800461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.974 [2024-12-09 05:31:09.800470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.974 [2024-12-09 05:31:09.812184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.974 [2024-12-09 05:31:09.812780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.974 [2024-12-09 05:31:09.812824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.974 [2024-12-09 05:31:09.812837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.974 [2024-12-09 05:31:09.813027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.974 [2024-12-09 05:31:09.813195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.974 [2024-12-09 05:31:09.813204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.974 [2024-12-09 05:31:09.813213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.974 [2024-12-09 05:31:09.813221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.974 [2024-12-09 05:31:09.825081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.974 [2024-12-09 05:31:09.825698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.975 [2024-12-09 05:31:09.825736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.975 [2024-12-09 05:31:09.825747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.975 [2024-12-09 05:31:09.825944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.975 [2024-12-09 05:31:09.826112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.975 [2024-12-09 05:31:09.826121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.975 [2024-12-09 05:31:09.826129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.975 [2024-12-09 05:31:09.826137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.975 [2024-12-09 05:31:09.838023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.975 [2024-12-09 05:31:09.838610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.975 [2024-12-09 05:31:09.838647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.975 [2024-12-09 05:31:09.838658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.975 [2024-12-09 05:31:09.838854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.975 [2024-12-09 05:31:09.839023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.975 [2024-12-09 05:31:09.839033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.975 [2024-12-09 05:31:09.839042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.975 [2024-12-09 05:31:09.839050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.975 [2024-12-09 05:31:09.850903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.975 [2024-12-09 05:31:09.851429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.975 [2024-12-09 05:31:09.851449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.975 [2024-12-09 05:31:09.851457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.975 [2024-12-09 05:31:09.851626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.975 [2024-12-09 05:31:09.851790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.975 [2024-12-09 05:31:09.851799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.975 [2024-12-09 05:31:09.851806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.975 [2024-12-09 05:31:09.851813] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.975 [2024-12-09 05:31:09.863819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.975 [2024-12-09 05:31:09.864223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.975 [2024-12-09 05:31:09.864242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.975 [2024-12-09 05:31:09.864249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.975 [2024-12-09 05:31:09.864413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.975 [2024-12-09 05:31:09.864576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.975 [2024-12-09 05:31:09.864585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.975 [2024-12-09 05:31:09.864592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.975 [2024-12-09 05:31:09.864599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.975 [2024-12-09 05:31:09.876603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.975 [2024-12-09 05:31:09.877118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.975 [2024-12-09 05:31:09.877136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.975 [2024-12-09 05:31:09.877144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.975 [2024-12-09 05:31:09.877306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.975 [2024-12-09 05:31:09.877470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.975 [2024-12-09 05:31:09.877479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.975 [2024-12-09 05:31:09.877486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.975 [2024-12-09 05:31:09.877493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.975 [2024-12-09 05:31:09.889493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.975 [2024-12-09 05:31:09.889993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.975 [2024-12-09 05:31:09.890011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.975 [2024-12-09 05:31:09.890019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.975 [2024-12-09 05:31:09.890183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.975 [2024-12-09 05:31:09.890350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.975 [2024-12-09 05:31:09.890359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.975 [2024-12-09 05:31:09.890366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.975 [2024-12-09 05:31:09.890373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.975 [2024-12-09 05:31:09.902397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.975 [2024-12-09 05:31:09.902953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.975 [2024-12-09 05:31:09.902972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.975 [2024-12-09 05:31:09.902980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.975 [2024-12-09 05:31:09.903144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.975 [2024-12-09 05:31:09.903307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.975 [2024-12-09 05:31:09.903316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.975 [2024-12-09 05:31:09.903323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.975 [2024-12-09 05:31:09.903330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.975 [2024-12-09 05:31:09.915220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.975 [2024-12-09 05:31:09.915710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.975 [2024-12-09 05:31:09.915729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.975 [2024-12-09 05:31:09.915737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.975 [2024-12-09 05:31:09.915907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.975 [2024-12-09 05:31:09.916071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.975 [2024-12-09 05:31:09.916081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.975 [2024-12-09 05:31:09.916088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.975 [2024-12-09 05:31:09.916095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.975 [2024-12-09 05:31:09.928106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.975 [2024-12-09 05:31:09.928584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.975 [2024-12-09 05:31:09.928602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.975 [2024-12-09 05:31:09.928609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.975 [2024-12-09 05:31:09.928772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.975 [2024-12-09 05:31:09.928941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.975 [2024-12-09 05:31:09.928951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.975 [2024-12-09 05:31:09.928961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.975 [2024-12-09 05:31:09.928968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.975 [2024-12-09 05:31:09.940983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.975 [2024-12-09 05:31:09.941605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.976 [2024-12-09 05:31:09.941642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.976 [2024-12-09 05:31:09.941654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.976 [2024-12-09 05:31:09.941851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.976 [2024-12-09 05:31:09.942020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.976 [2024-12-09 05:31:09.942030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.976 [2024-12-09 05:31:09.942038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.976 [2024-12-09 05:31:09.942046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:55.976 [2024-12-09 05:31:09.953920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:55.976 [2024-12-09 05:31:09.954401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:55.976 [2024-12-09 05:31:09.954421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:55.976 [2024-12-09 05:31:09.954429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:55.976 [2024-12-09 05:31:09.954593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:55.976 [2024-12-09 05:31:09.954758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:55.976 [2024-12-09 05:31:09.954767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:55.976 [2024-12-09 05:31:09.954774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:55.976 [2024-12-09 05:31:09.954781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.237 [2024-12-09 05:31:09.966805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.237 [2024-12-09 05:31:09.967305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.237 [2024-12-09 05:31:09.967323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.237 [2024-12-09 05:31:09.967332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.237 [2024-12-09 05:31:09.967495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.237 [2024-12-09 05:31:09.967659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.237 [2024-12-09 05:31:09.967668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.237 [2024-12-09 05:31:09.967676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.237 [2024-12-09 05:31:09.967683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.237 [2024-12-09 05:31:09.979702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.237 [2024-12-09 05:31:09.980199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.237 [2024-12-09 05:31:09.980218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.237 [2024-12-09 05:31:09.980226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.237 [2024-12-09 05:31:09.980389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.237 [2024-12-09 05:31:09.980553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.237 [2024-12-09 05:31:09.980561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.237 [2024-12-09 05:31:09.980568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.237 [2024-12-09 05:31:09.980575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.237 [2024-12-09 05:31:09.992591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.237 [2024-12-09 05:31:09.993160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.237 [2024-12-09 05:31:09.993197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:09.993208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:09.993409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:09.993577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:09.993588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:09.993595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:09.993604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.005509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.005924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.005962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.005975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.006169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.006337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.006348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.006356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.006365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.018345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.018874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.018915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.018927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.019119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.019287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.019297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.019305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.019313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.031198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.031836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.031874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.031886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.032078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.032246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.032257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.032265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.032273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.044133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.044567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.044587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.044595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.044760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.044931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.044941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.044949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.044956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.056981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.057504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.057522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.057532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.057700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.057869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.057880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.057887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.057895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.069771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.070318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.070337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.070346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.070509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.070673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.070682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.070689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.070696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.082564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.083064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.083083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.083090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.083254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.083418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.083428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.083436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.083442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.095479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.095967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.095986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.095995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.096160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.096325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.096339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.096347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.096354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.108378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.108885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.108903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.108911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.109075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.109239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.109249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.109256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.109263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.121306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.121843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.121860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.121875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.122040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.122205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.122214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.122221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.122228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.134100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.134599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.134617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.134625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.134790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.134960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.134970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.134978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.134990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.147015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.147572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.147611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.147624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.147824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.147995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.148005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.148014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.148022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.159912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.160379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.160399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.160408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.160573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.160739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.160748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.160756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.160763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.172811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.173349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.173368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.173376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.173539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.173704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.173713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.173721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.173729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.185607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.186098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.186116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.186124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.186288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.186452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.186461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.186469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.186476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.198471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.198976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.198995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.199004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.199169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.199335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.199344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.199353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.199361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.211353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.212050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.212088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.212100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.212290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.212458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.212468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.212477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.238 [2024-12-09 05:31:10.212485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.238 [2024-12-09 05:31:10.224233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.238 [2024-12-09 05:31:10.224784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.238 [2024-12-09 05:31:10.224805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.238 [2024-12-09 05:31:10.224825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.238 [2024-12-09 05:31:10.224991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.238 [2024-12-09 05:31:10.225155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.238 [2024-12-09 05:31:10.225166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.238 [2024-12-09 05:31:10.225173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.239 [2024-12-09 05:31:10.225181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.500 [2024-12-09 05:31:10.237174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.500 [2024-12-09 05:31:10.237697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.500 [2024-12-09 05:31:10.237715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.500 [2024-12-09 05:31:10.237723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.500 [2024-12-09 05:31:10.237895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.500 [2024-12-09 05:31:10.238060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.500 [2024-12-09 05:31:10.238069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.500 [2024-12-09 05:31:10.238077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.500 [2024-12-09 05:31:10.238084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.500 [2024-12-09 05:31:10.250059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.500 [2024-12-09 05:31:10.250575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.500 [2024-12-09 05:31:10.250593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.500 [2024-12-09 05:31:10.250601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.500 [2024-12-09 05:31:10.250765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.500 [2024-12-09 05:31:10.250937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.500 [2024-12-09 05:31:10.250949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.500 [2024-12-09 05:31:10.250961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.500 [2024-12-09 05:31:10.250973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.500 [2024-12-09 05:31:10.262960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.500 [2024-12-09 05:31:10.263466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.500 [2024-12-09 05:31:10.263484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.500 [2024-12-09 05:31:10.263493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.500 [2024-12-09 05:31:10.263657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.500 [2024-12-09 05:31:10.263836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.500 [2024-12-09 05:31:10.263846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.500 [2024-12-09 05:31:10.263854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.500 [2024-12-09 05:31:10.263861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.500 [2024-12-09 05:31:10.275842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.500 [2024-12-09 05:31:10.276367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.500 [2024-12-09 05:31:10.276385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.500 [2024-12-09 05:31:10.276393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.500 [2024-12-09 05:31:10.276557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.500 [2024-12-09 05:31:10.276727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.500 [2024-12-09 05:31:10.276738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.500 [2024-12-09 05:31:10.276745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.500 [2024-12-09 05:31:10.276752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.500 [2024-12-09 05:31:10.288738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.500 [2024-12-09 05:31:10.289261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.500 [2024-12-09 05:31:10.289278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.500 [2024-12-09 05:31:10.289286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.500 [2024-12-09 05:31:10.289452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.500 [2024-12-09 05:31:10.289623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.500 [2024-12-09 05:31:10.289633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.500 [2024-12-09 05:31:10.289641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.289648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.301639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.302131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.302149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.302158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.302331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.501 [2024-12-09 05:31:10.302504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.501 [2024-12-09 05:31:10.302516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.501 [2024-12-09 05:31:10.302523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.302530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.314508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.315033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.315052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.315059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.315233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.501 [2024-12-09 05:31:10.315398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.501 [2024-12-09 05:31:10.315407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.501 [2024-12-09 05:31:10.315414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.315421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.327408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.327913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.327931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.327939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.328109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.501 [2024-12-09 05:31:10.328274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.501 [2024-12-09 05:31:10.328283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.501 [2024-12-09 05:31:10.328291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.328298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.340275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.340791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.340809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.340822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.340986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.501 [2024-12-09 05:31:10.341150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.501 [2024-12-09 05:31:10.341159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.501 [2024-12-09 05:31:10.341166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.341177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.353158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.353672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.353691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.353699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.353870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.501 [2024-12-09 05:31:10.354035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.501 [2024-12-09 05:31:10.354044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.501 [2024-12-09 05:31:10.354052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.354060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.366034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.366542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.366562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.366569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.366734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.501 [2024-12-09 05:31:10.366912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.501 [2024-12-09 05:31:10.366923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.501 [2024-12-09 05:31:10.366930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.366938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.378928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.379405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.379423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.379431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.379595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.501 [2024-12-09 05:31:10.379759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.501 [2024-12-09 05:31:10.379769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.501 [2024-12-09 05:31:10.379776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.379784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.391760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.392169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.392187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.392196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.392360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.501 [2024-12-09 05:31:10.392524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.501 [2024-12-09 05:31:10.392533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.501 [2024-12-09 05:31:10.392541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.392548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.404691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.405221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.405240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.405247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.405411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.501 [2024-12-09 05:31:10.405575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.501 [2024-12-09 05:31:10.405584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.501 [2024-12-09 05:31:10.405591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.501 [2024-12-09 05:31:10.405598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.501 [2024-12-09 05:31:10.417571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.501 [2024-12-09 05:31:10.418075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.501 [2024-12-09 05:31:10.418094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.501 [2024-12-09 05:31:10.418102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.501 [2024-12-09 05:31:10.418265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.502 [2024-12-09 05:31:10.418429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.502 [2024-12-09 05:31:10.418438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.502 [2024-12-09 05:31:10.418446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.502 [2024-12-09 05:31:10.418452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.502 [2024-12-09 05:31:10.430450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.502 [2024-12-09 05:31:10.430858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.502 [2024-12-09 05:31:10.430876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.502 [2024-12-09 05:31:10.430887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.502 [2024-12-09 05:31:10.431051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.502 [2024-12-09 05:31:10.431214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.502 [2024-12-09 05:31:10.431223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.502 [2024-12-09 05:31:10.431231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.502 [2024-12-09 05:31:10.431237] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.502 4309.20 IOPS, 16.83 MiB/s [2024-12-09T04:31:10.499Z] [2024-12-09 05:31:10.443359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.502 [2024-12-09 05:31:10.443828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.502 [2024-12-09 05:31:10.443846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.502 [2024-12-09 05:31:10.443854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.502 [2024-12-09 05:31:10.444017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.502 [2024-12-09 05:31:10.444181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.502 [2024-12-09 05:31:10.444190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.502 [2024-12-09 05:31:10.444197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.502 [2024-12-09 05:31:10.444204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.502 [2024-12-09 05:31:10.456196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.502 [2024-12-09 05:31:10.456675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.502 [2024-12-09 05:31:10.456693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.502 [2024-12-09 05:31:10.456702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.502 [2024-12-09 05:31:10.456870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.502 [2024-12-09 05:31:10.457035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.502 [2024-12-09 05:31:10.457045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.502 [2024-12-09 05:31:10.457053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.502 [2024-12-09 05:31:10.457061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
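The fragment "4309.20 IOPS, 16.83 MiB/s" interleaved above is bdevperf's periodic throughput report for the surviving I/O path. The two figures are mutually consistent with a 4 KiB I/O size: 4309.20 IOPS x 4096 B = 17,650,483 B/s, and 17,650,483 / 1,048,576 = 16.83 MiB/s.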
00:37:56.502 [2024-12-09 05:31:10.469035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.502 [2024-12-09 05:31:10.469549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.502 [2024-12-09 05:31:10.469566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.502 [2024-12-09 05:31:10.469574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.502 [2024-12-09 05:31:10.469738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.502 [2024-12-09 05:31:10.469920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.502 [2024-12-09 05:31:10.469932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.502 [2024-12-09 05:31:10.469940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.502 [2024-12-09 05:31:10.469947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.502 [2024-12-09 05:31:10.481916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.502 [2024-12-09 05:31:10.482530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.502 [2024-12-09 05:31:10.482567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.502 [2024-12-09 05:31:10.482579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.502 [2024-12-09 05:31:10.482779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.502 [2024-12-09 05:31:10.482958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.502 [2024-12-09 05:31:10.482969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.502 [2024-12-09 05:31:10.482984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.502 [2024-12-09 05:31:10.482994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.766 [2024-12-09 05:31:10.494859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.766 [2024-12-09 05:31:10.495478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.766 [2024-12-09 05:31:10.495515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.766 [2024-12-09 05:31:10.495527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.766 [2024-12-09 05:31:10.495726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.766 [2024-12-09 05:31:10.495903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.766 [2024-12-09 05:31:10.495914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.766 [2024-12-09 05:31:10.495922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.766 [2024-12-09 05:31:10.495931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.766 [2024-12-09 05:31:10.507757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.766 [2024-12-09 05:31:10.508293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.766 [2024-12-09 05:31:10.508311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.766 [2024-12-09 05:31:10.508320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.766 [2024-12-09 05:31:10.508494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.766 [2024-12-09 05:31:10.508660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.766 [2024-12-09 05:31:10.508669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.766 [2024-12-09 05:31:10.508687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.766 [2024-12-09 05:31:10.508694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.766 [2024-12-09 05:31:10.520677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.766 [2024-12-09 05:31:10.521171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.766 [2024-12-09 05:31:10.521191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.766 [2024-12-09 05:31:10.521200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.766 [2024-12-09 05:31:10.521376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.766 [2024-12-09 05:31:10.521541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.766 [2024-12-09 05:31:10.521551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.766 [2024-12-09 05:31:10.521558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.766 [2024-12-09 05:31:10.521566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.766 [2024-12-09 05:31:10.533541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.766 [2024-12-09 05:31:10.534025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.766 [2024-12-09 05:31:10.534044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.766 [2024-12-09 05:31:10.534053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.766 [2024-12-09 05:31:10.534225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.766 [2024-12-09 05:31:10.534390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.766 [2024-12-09 05:31:10.534399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.766 [2024-12-09 05:31:10.534407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.766 [2024-12-09 05:31:10.534415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.766 [2024-12-09 05:31:10.546430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.766 [2024-12-09 05:31:10.546905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.766 [2024-12-09 05:31:10.546925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.766 [2024-12-09 05:31:10.546937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.766 [2024-12-09 05:31:10.547110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.766 [2024-12-09 05:31:10.547274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.766 [2024-12-09 05:31:10.547284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.766 [2024-12-09 05:31:10.547291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.766 [2024-12-09 05:31:10.547298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.766 [2024-12-09 05:31:10.559249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.766 [2024-12-09 05:31:10.559766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.766 [2024-12-09 05:31:10.559802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.766 [2024-12-09 05:31:10.559821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.766 [2024-12-09 05:31:10.560015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.766 [2024-12-09 05:31:10.560183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.766 [2024-12-09 05:31:10.560194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.766 [2024-12-09 05:31:10.560203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.766 [2024-12-09 05:31:10.560213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1822494 Killed "${NVMF_APP[@]}" "$@"
00:37:56.766 [2024-12-09 05:31:10.572188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.766 [2024-12-09 05:31:10.572564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.766 [2024-12-09 05:31:10.572587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.766 [2024-12-09 05:31:10.572597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.766 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:37:56.766 [2024-12-09 05:31:10.572766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.766 [2024-12-09 05:31:10.572938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.766 [2024-12-09 05:31:10.572949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.766 [2024-12-09 05:31:10.572957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.766 [2024-12-09 05:31:10.572964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.766 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:37:56.766 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:56.766 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:56.766 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:56.767 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1824310
00:37:56.767 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1824310
00:37:56.767 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:37:56.767 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1824310 ']'
00:37:56.767 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:56.767 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:56.767 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:56.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
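The "Killed" message above shows the harness terminating the first nvmf target process (pid 1822494); the traced shell lines that follow are bdevperf.sh's tgt_init restarting it via nvmfappstart -m 0xE inside the cvl_0_0_ns_spdk network namespace, after which waitforlisten (with max_retries=100) polls until the new process, pid 1824310, is up and serving the RPC socket /var/tmp/spdk.sock. A rough standalone C sketch of that wait-for-listen pattern, illustrative only, since the real helper in autotest_common.sh drives the SPDK RPC client rather than a raw connect():

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);

    for (int retry = 0; retry < 100; retry++) {   /* mirrors max_retries=100 */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);                            /* someone is listening */
            puts("target is up");
            return 0;
        }
        close(fd);
        usleep(100 * 1000);                       /* 100 ms between retries */
    }
    fprintf(stderr, "target never started listening\n");
    return 1;
}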
00:37:56.767 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:56.767 05:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:56.767 [2024-12-09 05:31:10.585092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.767 [2024-12-09 05:31:10.585545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.767 [2024-12-09 05:31:10.585565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.767 [2024-12-09 05:31:10.585573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.767 [2024-12-09 05:31:10.585737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.767 [2024-12-09 05:31:10.585907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.767 [2024-12-09 05:31:10.585917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.767 [2024-12-09 05:31:10.585925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.767 [2024-12-09 05:31:10.585933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.767 [2024-12-09 05:31:10.597913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.767 [2024-12-09 05:31:10.598429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.767 [2024-12-09 05:31:10.598448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.767 [2024-12-09 05:31:10.598456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.767 [2024-12-09 05:31:10.598620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.767 [2024-12-09 05:31:10.598784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.767 [2024-12-09 05:31:10.598794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.767 [2024-12-09 05:31:10.598802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.767 [2024-12-09 05:31:10.598810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.767 [2024-12-09 05:31:10.610796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.767 [2024-12-09 05:31:10.611377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.767 [2024-12-09 05:31:10.611415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.767 [2024-12-09 05:31:10.611426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.767 [2024-12-09 05:31:10.611616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.767 [2024-12-09 05:31:10.611784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.767 [2024-12-09 05:31:10.611794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.767 [2024-12-09 05:31:10.611807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.767 [2024-12-09 05:31:10.611824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.767 [2024-12-09 05:31:10.623659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.767 [2024-12-09 05:31:10.624207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.767 [2024-12-09 05:31:10.624228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.767 [2024-12-09 05:31:10.624237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.767 [2024-12-09 05:31:10.624402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.767 [2024-12-09 05:31:10.624566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.767 [2024-12-09 05:31:10.624576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.767 [2024-12-09 05:31:10.624583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.767 [2024-12-09 05:31:10.624590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.767 [2024-12-09 05:31:10.636565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.767 [2024-12-09 05:31:10.637097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.767 [2024-12-09 05:31:10.637135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.767 [2024-12-09 05:31:10.637147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.767 [2024-12-09 05:31:10.637338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.767 [2024-12-09 05:31:10.637506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.767 [2024-12-09 05:31:10.637516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.767 [2024-12-09 05:31:10.637524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.767 [2024-12-09 05:31:10.637532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.767 [2024-12-09 05:31:10.649360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.767 [2024-12-09 05:31:10.649732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.767 [2024-12-09 05:31:10.649751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.767 [2024-12-09 05:31:10.649760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.767 [2024-12-09 05:31:10.649931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.767 [2024-12-09 05:31:10.650096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.767 [2024-12-09 05:31:10.650105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.767 [2024-12-09 05:31:10.650114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.767 [2024-12-09 05:31:10.650121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.767 [2024-12-09 05:31:10.662281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.767 [2024-12-09 05:31:10.662928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.767 [2024-12-09 05:31:10.662969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.767 [2024-12-09 05:31:10.662981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.767 [2024-12-09 05:31:10.663172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.767 [2024-12-09 05:31:10.663341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.767 [2024-12-09 05:31:10.663350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.767 [2024-12-09 05:31:10.663358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.767 [2024-12-09 05:31:10.663367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.767 [2024-12-09 05:31:10.665570] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:37:56.767 [2024-12-09 05:31:10.665667] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:56.767 [2024-12-09 05:31:10.675206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.767 [2024-12-09 05:31:10.675807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.767 [2024-12-09 05:31:10.675850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.767 [2024-12-09 05:31:10.675862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.767 [2024-12-09 05:31:10.676053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.767 [2024-12-09 05:31:10.676222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.767 [2024-12-09 05:31:10.676232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.767 [2024-12-09 05:31:10.676240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.767 [2024-12-09 05:31:10.676248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.767 [2024-12-09 05:31:10.688128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.767 [2024-12-09 05:31:10.688672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.767 [2024-12-09 05:31:10.688710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.767 [2024-12-09 05:31:10.688721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.768 [2024-12-09 05:31:10.688918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.768 [2024-12-09 05:31:10.689088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.768 [2024-12-09 05:31:10.689098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.768 [2024-12-09 05:31:10.689107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.768 [2024-12-09 05:31:10.689118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.768 [2024-12-09 05:31:10.701033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.768 [2024-12-09 05:31:10.701437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.768 [2024-12-09 05:31:10.701457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.768 [2024-12-09 05:31:10.701467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.768 [2024-12-09 05:31:10.701634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.768 [2024-12-09 05:31:10.701801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.768 [2024-12-09 05:31:10.701811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.768 [2024-12-09 05:31:10.701824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.768 [2024-12-09 05:31:10.701831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.768 [2024-12-09 05:31:10.713906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.768 [2024-12-09 05:31:10.714528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.768 [2024-12-09 05:31:10.714564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.768 [2024-12-09 05:31:10.714576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.768 [2024-12-09 05:31:10.714767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.768 [2024-12-09 05:31:10.714943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.768 [2024-12-09 05:31:10.714954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.768 [2024-12-09 05:31:10.714963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.768 [2024-12-09 05:31:10.714971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.768 [2024-12-09 05:31:10.726694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.768 [2024-12-09 05:31:10.727186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.768 [2024-12-09 05:31:10.727224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.768 [2024-12-09 05:31:10.727236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.768 [2024-12-09 05:31:10.727425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.768 [2024-12-09 05:31:10.727594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.768 [2024-12-09 05:31:10.727604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.768 [2024-12-09 05:31:10.727613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.768 [2024-12-09 05:31:10.727622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.768 [2024-12-09 05:31:10.739495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.768 [2024-12-09 05:31:10.740074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.768 [2024-12-09 05:31:10.740112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.768 [2024-12-09 05:31:10.740123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.768 [2024-12-09 05:31:10.740317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.768 [2024-12-09 05:31:10.740486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.768 [2024-12-09 05:31:10.740497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.768 [2024-12-09 05:31:10.740506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.768 [2024-12-09 05:31:10.740515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:56.768 [2024-12-09 05:31:10.752404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:56.768 [2024-12-09 05:31:10.752917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:56.768 [2024-12-09 05:31:10.752938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:56.768 [2024-12-09 05:31:10.752947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:56.768 [2024-12-09 05:31:10.753112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:56.768 [2024-12-09 05:31:10.753277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:56.768 [2024-12-09 05:31:10.753286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:56.768 [2024-12-09 05:31:10.753294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:56.768 [2024-12-09 05:31:10.753301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.030 [2024-12-09 05:31:10.765318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.030 [2024-12-09 05:31:10.765900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.030 [2024-12-09 05:31:10.765937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.030 [2024-12-09 05:31:10.765949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.030 [2024-12-09 05:31:10.766140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.030 [2024-12-09 05:31:10.766309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.030 [2024-12-09 05:31:10.766320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.030 [2024-12-09 05:31:10.766328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.030 [2024-12-09 05:31:10.766337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.030 [2024-12-09 05:31:10.778222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.030 [2024-12-09 05:31:10.778732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.030 [2024-12-09 05:31:10.778752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.030 [2024-12-09 05:31:10.778760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.030 [2024-12-09 05:31:10.778931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.030 [2024-12-09 05:31:10.779100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.030 [2024-12-09 05:31:10.779110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.030 [2024-12-09 05:31:10.779117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.030 [2024-12-09 05:31:10.779124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.030 [2024-12-09 05:31:10.791140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.030 [2024-12-09 05:31:10.791764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.030 [2024-12-09 05:31:10.791801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.030 [2024-12-09 05:31:10.791813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.030 [2024-12-09 05:31:10.792009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.030 [2024-12-09 05:31:10.792177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.030 [2024-12-09 05:31:10.792187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.030 [2024-12-09 05:31:10.792195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.030 [2024-12-09 05:31:10.792205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.030 [2024-12-09 05:31:10.803941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.030 [2024-12-09 05:31:10.804522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.030 [2024-12-09 05:31:10.804559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.030 [2024-12-09 05:31:10.804571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.030 [2024-12-09 05:31:10.804760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.030 [2024-12-09 05:31:10.804936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.030 [2024-12-09 05:31:10.804947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.030 [2024-12-09 05:31:10.804955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.030 [2024-12-09 05:31:10.804964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.030 [2024-12-09 05:31:10.810621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:37:57.030 [2024-12-09 05:31:10.816838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.030 [2024-12-09 05:31:10.817321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.030 [2024-12-09 05:31:10.817358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.030 [2024-12-09 05:31:10.817370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.030 [2024-12-09 05:31:10.817560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.030 [2024-12-09 05:31:10.817729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.030 [2024-12-09 05:31:10.817740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.030 [2024-12-09 05:31:10.817755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.030 [2024-12-09 05:31:10.817764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.030 [2024-12-09 05:31:10.829639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.030 [2024-12-09 05:31:10.830154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.030 [2024-12-09 05:31:10.830174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.030 [2024-12-09 05:31:10.830183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.030 [2024-12-09 05:31:10.830349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.030 [2024-12-09 05:31:10.830514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.030 [2024-12-09 05:31:10.830523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.030 [2024-12-09 05:31:10.830532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.030 [2024-12-09 05:31:10.830539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.030 [2024-12-09 05:31:10.842554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.030 [2024-12-09 05:31:10.843173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.030 [2024-12-09 05:31:10.843211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.030 [2024-12-09 05:31:10.843223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.030 [2024-12-09 05:31:10.843412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.030 [2024-12-09 05:31:10.843580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.030 [2024-12-09 05:31:10.843591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.030 [2024-12-09 05:31:10.843599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.030 [2024-12-09 05:31:10.843608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.030 [2024-12-09 05:31:10.855487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.030 [2024-12-09 05:31:10.856149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.030 [2024-12-09 05:31:10.856194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.030 [2024-12-09 05:31:10.856205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.030 [2024-12-09 05:31:10.856395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.030 [2024-12-09 05:31:10.856564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.030 [2024-12-09 05:31:10.856574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.030 [2024-12-09 05:31:10.856582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.030 [2024-12-09 05:31:10.856594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.030 [2024-12-09 05:31:10.868327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.030 [2024-12-09 05:31:10.868928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.030 [2024-12-09 05:31:10.868966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.031 [2024-12-09 05:31:10.868978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.031 [2024-12-09 05:31:10.869169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.031 [2024-12-09 05:31:10.869338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.031 [2024-12-09 05:31:10.869349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.031 [2024-12-09 05:31:10.869357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.031 [2024-12-09 05:31:10.869367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.031 [2024-12-09 05:31:10.881253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.031 [2024-12-09 05:31:10.881882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.031 [2024-12-09 05:31:10.881920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.031 [2024-12-09 05:31:10.881932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.031 [2024-12-09 05:31:10.882125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.031 [2024-12-09 05:31:10.882296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.031 [2024-12-09 05:31:10.882308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.031 [2024-12-09 05:31:10.882316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.031 [2024-12-09 05:31:10.882325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.031 [2024-12-09 05:31:10.886667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:57.031 [2024-12-09 05:31:10.886695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:57.031 [2024-12-09 05:31:10.886704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:57.031 [2024-12-09 05:31:10.886713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:57.031 [2024-12-09 05:31:10.886720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:57.031 [2024-12-09 05:31:10.888388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:37:57.031 [2024-12-09 05:31:10.888481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:57.031 [2024-12-09 05:31:10.888507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:37:57.031 [2024-12-09 05:31:10.894067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.031 [2024-12-09 05:31:10.894497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.031 [2024-12-09 05:31:10.894517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.031 [2024-12-09 05:31:10.894526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.031 [2024-12-09 05:31:10.894696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.031 [2024-12-09 05:31:10.894867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.031 [2024-12-09 05:31:10.894879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.031 [2024-12-09 05:31:10.894887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.031 [2024-12-09 05:31:10.894895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.031 [2024-12-09 05:31:10.906938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.031 [2024-12-09 05:31:10.907466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.031 [2024-12-09 05:31:10.907484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.031 [2024-12-09 05:31:10.907493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.031 [2024-12-09 05:31:10.907658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.031 [2024-12-09 05:31:10.907827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.031 [2024-12-09 05:31:10.907838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.031 [2024-12-09 05:31:10.907845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.031 [2024-12-09 05:31:10.907852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.031 [2024-12-09 05:31:10.919867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.031 [2024-12-09 05:31:10.920418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.031 [2024-12-09 05:31:10.920456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.031 [2024-12-09 05:31:10.920467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.031 [2024-12-09 05:31:10.920660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.031 [2024-12-09 05:31:10.920837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.031 [2024-12-09 05:31:10.920848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.031 [2024-12-09 05:31:10.920857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.031 [2024-12-09 05:31:10.920867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.031 [2024-12-09 05:31:10.932718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.031 [2024-12-09 05:31:10.933259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.031 [2024-12-09 05:31:10.933297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.031 [2024-12-09 05:31:10.933310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.031 [2024-12-09 05:31:10.933503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.031 [2024-12-09 05:31:10.933672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.031 [2024-12-09 05:31:10.933685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.031 [2024-12-09 05:31:10.933695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.031 [2024-12-09 05:31:10.933704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.031 [2024-12-09 05:31:10.945610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.031 [2024-12-09 05:31:10.946237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.031 [2024-12-09 05:31:10.946274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.031 [2024-12-09 05:31:10.946286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.031 [2024-12-09 05:31:10.946477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.031 [2024-12-09 05:31:10.946646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.031 [2024-12-09 05:31:10.946656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.031 [2024-12-09 05:31:10.946664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.031 [2024-12-09 05:31:10.946673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.031 [2024-12-09 05:31:10.958418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.031 [2024-12-09 05:31:10.958949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.031 [2024-12-09 05:31:10.958986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.031 [2024-12-09 05:31:10.958998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.031 [2024-12-09 05:31:10.959189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.031 [2024-12-09 05:31:10.959356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.031 [2024-12-09 05:31:10.959367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.031 [2024-12-09 05:31:10.959376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.031 [2024-12-09 05:31:10.959385] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.031 [2024-12-09 05:31:10.971283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.031 [2024-12-09 05:31:10.971835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.031 [2024-12-09 05:31:10.971856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.031 [2024-12-09 05:31:10.971864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.032 [2024-12-09 05:31:10.972030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.032 [2024-12-09 05:31:10.972195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.032 [2024-12-09 05:31:10.972204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.032 [2024-12-09 05:31:10.972212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.032 [2024-12-09 05:31:10.972223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.032 [2024-12-09 05:31:10.984094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.032 [2024-12-09 05:31:10.984567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.032 [2024-12-09 05:31:10.984605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.032 [2024-12-09 05:31:10.984616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.032 [2024-12-09 05:31:10.984807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.032 [2024-12-09 05:31:10.984983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.032 [2024-12-09 05:31:10.984994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.032 [2024-12-09 05:31:10.985001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.032 [2024-12-09 05:31:10.985010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.032 [2024-12-09 05:31:10.996903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.032 [2024-12-09 05:31:10.997501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.032 [2024-12-09 05:31:10.997539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.032 [2024-12-09 05:31:10.997550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.032 [2024-12-09 05:31:10.997740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.032 [2024-12-09 05:31:10.997917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.032 [2024-12-09 05:31:10.997928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.032 [2024-12-09 05:31:10.997936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.032 [2024-12-09 05:31:10.997945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.032 [2024-12-09 05:31:11.009817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.032 [2024-12-09 05:31:11.010495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.032 [2024-12-09 05:31:11.010533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.032 [2024-12-09 05:31:11.010545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.032 [2024-12-09 05:31:11.010735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.032 [2024-12-09 05:31:11.010912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.032 [2024-12-09 05:31:11.010923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.032 [2024-12-09 05:31:11.010932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.032 [2024-12-09 05:31:11.010941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.295 [2024-12-09 05:31:11.022664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.295 [2024-12-09 05:31:11.023098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.295 [2024-12-09 05:31:11.023117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.295 [2024-12-09 05:31:11.023126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.295 [2024-12-09 05:31:11.023292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.295 [2024-12-09 05:31:11.023457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.295 [2024-12-09 05:31:11.023466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.295 [2024-12-09 05:31:11.023474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.295 [2024-12-09 05:31:11.023481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.295 [2024-12-09 05:31:11.035500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.295 [2024-12-09 05:31:11.036153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.295 [2024-12-09 05:31:11.036192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.295 [2024-12-09 05:31:11.036203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.295 [2024-12-09 05:31:11.036414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.295 [2024-12-09 05:31:11.036582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.295 [2024-12-09 05:31:11.036594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.295 [2024-12-09 05:31:11.036601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.295 [2024-12-09 05:31:11.036610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.295 [2024-12-09 05:31:11.048341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.048921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.048958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.048970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.296 [2024-12-09 05:31:11.049162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.296 [2024-12-09 05:31:11.049331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.296 [2024-12-09 05:31:11.049343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.296 [2024-12-09 05:31:11.049350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.296 [2024-12-09 05:31:11.049360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.296 [2024-12-09 05:31:11.061242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.061914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.061952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.061968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.296 [2024-12-09 05:31:11.062158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.296 [2024-12-09 05:31:11.062326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.296 [2024-12-09 05:31:11.062337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.296 [2024-12-09 05:31:11.062344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.296 [2024-12-09 05:31:11.062353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.296 [2024-12-09 05:31:11.074094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.074741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.074778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.074790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.296 [2024-12-09 05:31:11.074988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.296 [2024-12-09 05:31:11.075157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.296 [2024-12-09 05:31:11.075169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.296 [2024-12-09 05:31:11.075177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.296 [2024-12-09 05:31:11.075187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.296 [2024-12-09 05:31:11.086906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.087503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.087541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.087553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.296 [2024-12-09 05:31:11.087743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.296 [2024-12-09 05:31:11.087918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.296 [2024-12-09 05:31:11.087929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.296 [2024-12-09 05:31:11.087937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.296 [2024-12-09 05:31:11.087946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.296 [2024-12-09 05:31:11.099841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.100444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.100482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.100493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.296 [2024-12-09 05:31:11.100684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.296 [2024-12-09 05:31:11.100864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.296 [2024-12-09 05:31:11.100876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.296 [2024-12-09 05:31:11.100884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.296 [2024-12-09 05:31:11.100893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.296 [2024-12-09 05:31:11.112752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.113326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.113364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.113376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.296 [2024-12-09 05:31:11.113565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.296 [2024-12-09 05:31:11.113733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.296 [2024-12-09 05:31:11.113743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.296 [2024-12-09 05:31:11.113752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.296 [2024-12-09 05:31:11.113760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.296 [2024-12-09 05:31:11.125646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.126251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.126289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.126300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.296 [2024-12-09 05:31:11.126490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.296 [2024-12-09 05:31:11.126658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.296 [2024-12-09 05:31:11.126668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.296 [2024-12-09 05:31:11.126676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.296 [2024-12-09 05:31:11.126685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.296 [2024-12-09 05:31:11.138553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.139067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.139087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.139096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.296 [2024-12-09 05:31:11.139260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.296 [2024-12-09 05:31:11.139425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.296 [2024-12-09 05:31:11.139434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.296 [2024-12-09 05:31:11.139445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.296 [2024-12-09 05:31:11.139453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.296 [2024-12-09 05:31:11.151467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.151963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.151981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.151989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.296 [2024-12-09 05:31:11.152153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.296 [2024-12-09 05:31:11.152316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.296 [2024-12-09 05:31:11.152325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.296 [2024-12-09 05:31:11.152332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.296 [2024-12-09 05:31:11.152339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.296 [2024-12-09 05:31:11.164355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:57.296 [2024-12-09 05:31:11.164729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:57.296 [2024-12-09 05:31:11.164747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420
00:37:57.296 [2024-12-09 05:31:11.164755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set
00:37:57.297 [2024-12-09 05:31:11.164924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor
00:37:57.297 [2024-12-09 05:31:11.165088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:57.297 [2024-12-09 05:31:11.165097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:57.297 [2024-12-09 05:31:11.165105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:57.297 [2024-12-09 05:31:11.165112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:57.297 [2024-12-09 05:31:11.177135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.297 [2024-12-09 05:31:11.177517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.297 [2024-12-09 05:31:11.177536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.297 [2024-12-09 05:31:11.177544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.297 [2024-12-09 05:31:11.177708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.297 [2024-12-09 05:31:11.177877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.297 [2024-12-09 05:31:11.177887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.297 [2024-12-09 05:31:11.177895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.297 [2024-12-09 05:31:11.177902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.297 [2024-12-09 05:31:11.189983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.297 [2024-12-09 05:31:11.190376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.297 [2024-12-09 05:31:11.190394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.297 [2024-12-09 05:31:11.190402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.297 [2024-12-09 05:31:11.190565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.297 [2024-12-09 05:31:11.190729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.297 [2024-12-09 05:31:11.190738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.297 [2024-12-09 05:31:11.190746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.297 [2024-12-09 05:31:11.190753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.297 [2024-12-09 05:31:11.202784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.297 [2024-12-09 05:31:11.203318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.297 [2024-12-09 05:31:11.203336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.297 [2024-12-09 05:31:11.203344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.297 [2024-12-09 05:31:11.203507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.297 [2024-12-09 05:31:11.203671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.297 [2024-12-09 05:31:11.203680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.297 [2024-12-09 05:31:11.203688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.297 [2024-12-09 05:31:11.203695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.297 [2024-12-09 05:31:11.215562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.297 [2024-12-09 05:31:11.216160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.297 [2024-12-09 05:31:11.216198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.297 [2024-12-09 05:31:11.216210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.297 [2024-12-09 05:31:11.216399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.297 [2024-12-09 05:31:11.216567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.297 [2024-12-09 05:31:11.216585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.297 [2024-12-09 05:31:11.216593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.297 [2024-12-09 05:31:11.216602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.297 [2024-12-09 05:31:11.228352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.297 [2024-12-09 05:31:11.229027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.297 [2024-12-09 05:31:11.229065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.297 [2024-12-09 05:31:11.229076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.297 [2024-12-09 05:31:11.229266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.297 [2024-12-09 05:31:11.229434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.297 [2024-12-09 05:31:11.229444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.297 [2024-12-09 05:31:11.229452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.297 [2024-12-09 05:31:11.229461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.297 [2024-12-09 05:31:11.241192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.297 [2024-12-09 05:31:11.241677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.297 [2024-12-09 05:31:11.241715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.297 [2024-12-09 05:31:11.241727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.297 [2024-12-09 05:31:11.241922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.297 [2024-12-09 05:31:11.242091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.297 [2024-12-09 05:31:11.242100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.297 [2024-12-09 05:31:11.242109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.297 [2024-12-09 05:31:11.242117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.297 [2024-12-09 05:31:11.254133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.297 [2024-12-09 05:31:11.254671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.297 [2024-12-09 05:31:11.254691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.297 [2024-12-09 05:31:11.254699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.297 [2024-12-09 05:31:11.254869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.297 [2024-12-09 05:31:11.255035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.297 [2024-12-09 05:31:11.255045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.297 [2024-12-09 05:31:11.255052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.297 [2024-12-09 05:31:11.255059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.297 [2024-12-09 05:31:11.267065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.297 [2024-12-09 05:31:11.267695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.297 [2024-12-09 05:31:11.267733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.297 [2024-12-09 05:31:11.267745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.297 [2024-12-09 05:31:11.267945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.297 [2024-12-09 05:31:11.268114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.297 [2024-12-09 05:31:11.268124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.297 [2024-12-09 05:31:11.268132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.297 [2024-12-09 05:31:11.268142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.297 [2024-12-09 05:31:11.279860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.297 [2024-12-09 05:31:11.280267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.297 [2024-12-09 05:31:11.280286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.297 [2024-12-09 05:31:11.280295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.297 [2024-12-09 05:31:11.280459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.297 [2024-12-09 05:31:11.280624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.297 [2024-12-09 05:31:11.280633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.297 [2024-12-09 05:31:11.280641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.297 [2024-12-09 05:31:11.280648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.560 [2024-12-09 05:31:11.292663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.560 [2024-12-09 05:31:11.293254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.560 [2024-12-09 05:31:11.293292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.560 [2024-12-09 05:31:11.293303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.560 [2024-12-09 05:31:11.293492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.560 [2024-12-09 05:31:11.293661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.560 [2024-12-09 05:31:11.293671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.560 [2024-12-09 05:31:11.293679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.560 [2024-12-09 05:31:11.293688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.560 [2024-12-09 05:31:11.305575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.560 [2024-12-09 05:31:11.306196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.560 [2024-12-09 05:31:11.306234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.560 [2024-12-09 05:31:11.306246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.560 [2024-12-09 05:31:11.306437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.560 [2024-12-09 05:31:11.306608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.560 [2024-12-09 05:31:11.306620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.560 [2024-12-09 05:31:11.306628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.560 [2024-12-09 05:31:11.306637] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.560 [2024-12-09 05:31:11.318367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.560 [2024-12-09 05:31:11.319031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.560 [2024-12-09 05:31:11.319068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.560 [2024-12-09 05:31:11.319079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.560 [2024-12-09 05:31:11.319269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.560 [2024-12-09 05:31:11.319438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.560 [2024-12-09 05:31:11.319447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.560 [2024-12-09 05:31:11.319456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.560 [2024-12-09 05:31:11.319464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.560 [2024-12-09 05:31:11.331195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.560 [2024-12-09 05:31:11.331706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.560 [2024-12-09 05:31:11.331743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.560 [2024-12-09 05:31:11.331756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.560 [2024-12-09 05:31:11.331953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.560 [2024-12-09 05:31:11.332122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.560 [2024-12-09 05:31:11.332132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.560 [2024-12-09 05:31:11.332141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.560 [2024-12-09 05:31:11.332150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.560 [2024-12-09 05:31:11.344025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.560 [2024-12-09 05:31:11.344634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.560 [2024-12-09 05:31:11.344672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.560 [2024-12-09 05:31:11.344683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.560 [2024-12-09 05:31:11.344880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.560 [2024-12-09 05:31:11.345049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.560 [2024-12-09 05:31:11.345060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.560 [2024-12-09 05:31:11.345075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.560 [2024-12-09 05:31:11.345084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.560 [2024-12-09 05:31:11.356953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.560 [2024-12-09 05:31:11.357221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.560 [2024-12-09 05:31:11.357247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.560 [2024-12-09 05:31:11.357257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.561 [2024-12-09 05:31:11.357431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.561 [2024-12-09 05:31:11.357596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.561 [2024-12-09 05:31:11.357607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.561 [2024-12-09 05:31:11.357615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.561 [2024-12-09 05:31:11.357623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.561 [2024-12-09 05:31:11.369788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.561 [2024-12-09 05:31:11.370316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.561 [2024-12-09 05:31:11.370335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.561 [2024-12-09 05:31:11.370343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.561 [2024-12-09 05:31:11.370507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.561 [2024-12-09 05:31:11.370672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.561 [2024-12-09 05:31:11.370681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.561 [2024-12-09 05:31:11.370689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.561 [2024-12-09 05:31:11.370696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.561 [2024-12-09 05:31:11.382566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.561 [2024-12-09 05:31:11.383192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.561 [2024-12-09 05:31:11.383230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.561 [2024-12-09 05:31:11.383241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.561 [2024-12-09 05:31:11.383431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.561 [2024-12-09 05:31:11.383599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.561 [2024-12-09 05:31:11.383610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.561 [2024-12-09 05:31:11.383618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.561 [2024-12-09 05:31:11.383627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.561 [2024-12-09 05:31:11.395495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.561 [2024-12-09 05:31:11.396104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.561 [2024-12-09 05:31:11.396143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.561 [2024-12-09 05:31:11.396154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.561 [2024-12-09 05:31:11.396343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.561 [2024-12-09 05:31:11.396512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.561 [2024-12-09 05:31:11.396522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.561 [2024-12-09 05:31:11.396530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.561 [2024-12-09 05:31:11.396546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.561 [2024-12-09 05:31:11.408425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.561 [2024-12-09 05:31:11.409035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.561 [2024-12-09 05:31:11.409073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.561 [2024-12-09 05:31:11.409085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.561 [2024-12-09 05:31:11.409275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.561 [2024-12-09 05:31:11.409442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.561 [2024-12-09 05:31:11.409462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.561 [2024-12-09 05:31:11.409470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.561 [2024-12-09 05:31:11.409479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.561 [2024-12-09 05:31:11.421342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.561 [2024-12-09 05:31:11.421897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.561 [2024-12-09 05:31:11.421935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.561 [2024-12-09 05:31:11.421947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.561 [2024-12-09 05:31:11.422139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.561 [2024-12-09 05:31:11.422307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.561 [2024-12-09 05:31:11.422319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.561 [2024-12-09 05:31:11.422327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.561 [2024-12-09 05:31:11.422335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
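Every cycle above fails the same way: connect() inside posix_sock_create returns errno 111, which on Linux is ECONNREFUSED. Nothing is listening on 10.0.0.2:4420 yet, so each qpair socket is refused, the flush on the dead descriptor fails with "Bad file descriptor", and bdev_nvme marks the reset attempt failed and queues another. A minimal shell sketch of the same condition, assuming an nc build that supports -z (the loop and messages are illustrative, not part of the test scripts):

# Probe the port the initiator keeps connecting to; errno 111 (ECONNREFUSED)
# just means no socket is accepting on 10.0.0.2:4420 yet.
while ! nc -z -w 1 10.0.0.2 4420; do
    echo "no listener on 10.0.0.2:4420 yet; connect() is refused" >&2
    sleep 0.1
done
echo "listener up; the next controller reset can succeed"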
00:37:57.561 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.561 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:57.561 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:57.561 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:57.561 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:57.561 [2024-12-09 05:31:11.434203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.561 [2024-12-09 05:31:11.434715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.561 [2024-12-09 05:31:11.434734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.561 [2024-12-09 05:31:11.434743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.561 [2024-12-09 05:31:11.434912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.561 [2024-12-09 05:31:11.435076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.561 [2024-12-09 05:31:11.435086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.561 [2024-12-09 05:31:11.435094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.561 [2024-12-09 05:31:11.435101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.561 3591.00 IOPS, 14.03 MiB/s [2024-12-09T04:31:11.558Z] [2024-12-09 05:31:11.447091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.561 [2024-12-09 05:31:11.447597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.561 [2024-12-09 05:31:11.447616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.562 [2024-12-09 05:31:11.447624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.562 [2024-12-09 05:31:11.447787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.562 [2024-12-09 05:31:11.447956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.562 [2024-12-09 05:31:11.447966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.562 [2024-12-09 05:31:11.447973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.562 [2024-12-09 05:31:11.447980] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.562 [2024-12-09 05:31:11.459990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.562 [2024-12-09 05:31:11.460466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.562 [2024-12-09 05:31:11.460484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.562 [2024-12-09 05:31:11.460492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.562 [2024-12-09 05:31:11.460656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.562 [2024-12-09 05:31:11.460825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.562 [2024-12-09 05:31:11.460835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.562 [2024-12-09 05:31:11.460843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.562 [2024-12-09 05:31:11.460850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.562 [2024-12-09 05:31:11.472873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.562 [2024-12-09 05:31:11.473329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.562 [2024-12-09 05:31:11.473366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.562 [2024-12-09 05:31:11.473378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.562 [2024-12-09 05:31:11.473568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.562 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:57.562 [2024-12-09 05:31:11.473737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.562 [2024-12-09 05:31:11.473748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.562 [2024-12-09 05:31:11.473756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.562 [2024-12-09 05:31:11.473765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
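The teardown hook registered at nvmf/common.sh@512 in the trace above is a single trap: on interrupt, termination, or normal exit it dumps the app's shared memory best-effort (the "|| :" swallows any failure of process_shm) and then runs the suite teardown. Reproduced from the trace, with process_shm and nvmftestfini assumed in scope as suite helpers:

# Cleanup registered by nvmf/common.sh: best-effort shm dump, then teardown,
# on SIGINT, SIGTERM, or normal exit.
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT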
00:37:57.562 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:57.562 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.562 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:57.562 [2024-12-09 05:31:11.479411] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:57.562 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.562 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:57.562 [2024-12-09 05:31:11.485786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.562 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.562 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:57.562 [2024-12-09 05:31:11.486285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.562 [2024-12-09 05:31:11.486321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.562 [2024-12-09 05:31:11.486332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.562 [2024-12-09 05:31:11.486521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.562 [2024-12-09 05:31:11.486689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.562 [2024-12-09 05:31:11.486699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.562 [2024-12-09 05:31:11.486709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.562 [2024-12-09 05:31:11.486718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.562 [2024-12-09 05:31:11.498609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.562 [2024-12-09 05:31:11.499214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.562 [2024-12-09 05:31:11.499252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.562 [2024-12-09 05:31:11.499264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.562 [2024-12-09 05:31:11.499453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.562 [2024-12-09 05:31:11.499624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.562 [2024-12-09 05:31:11.499636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.562 [2024-12-09 05:31:11.499644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:37:57.562 [2024-12-09 05:31:11.499653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.562 [2024-12-09 05:31:11.511525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.562 [2024-12-09 05:31:11.512005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.562 [2024-12-09 05:31:11.512044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.562 [2024-12-09 05:31:11.512056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.562 [2024-12-09 05:31:11.512249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.562 [2024-12-09 05:31:11.512418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.562 [2024-12-09 05:31:11.512428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.562 [2024-12-09 05:31:11.512436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.562 [2024-12-09 05:31:11.512445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.562 [2024-12-09 05:31:11.524325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.562 [2024-12-09 05:31:11.524699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.562 [2024-12-09 05:31:11.524719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.562 [2024-12-09 05:31:11.524727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.562 [2024-12-09 05:31:11.524898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.562 [2024-12-09 05:31:11.525063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.563 [2024-12-09 05:31:11.525073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.563 [2024-12-09 05:31:11.525081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.563 [2024-12-09 05:31:11.525089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.563 [2024-12-09 05:31:11.537286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.563 [2024-12-09 05:31:11.537770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.563 [2024-12-09 05:31:11.537788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.563 [2024-12-09 05:31:11.537797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.563 [2024-12-09 05:31:11.537965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.563 [2024-12-09 05:31:11.538130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.563 [2024-12-09 05:31:11.538140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.563 [2024-12-09 05:31:11.538152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.563 [2024-12-09 05:31:11.538160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.563 Malloc0 00:37:57.563 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.563 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:57.563 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.563 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:57.563 [2024-12-09 05:31:11.550175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.563 [2024-12-09 05:31:11.550697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.563 [2024-12-09 05:31:11.550715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.563 [2024-12-09 05:31:11.550723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.563 [2024-12-09 05:31:11.550893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.563 [2024-12-09 05:31:11.551059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.563 [2024-12-09 05:31:11.551068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.563 [2024-12-09 05:31:11.551075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.563 [2024-12-09 05:31:11.551082] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:57.823 [2024-12-09 05:31:11.563092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.823 [2024-12-09 05:31:11.563581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:57.823 [2024-12-09 05:31:11.563619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393800 with addr=10.0.0.2, port=4420 00:37:57.823 [2024-12-09 05:31:11.563631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393800 is same with the state(6) to be set 00:37:57.823 [2024-12-09 05:31:11.563829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:37:57.823 [2024-12-09 05:31:11.563998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:57.823 [2024-12-09 05:31:11.564008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:57.823 [2024-12-09 05:31:11.564016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:57.823 [2024-12-09 05:31:11.564025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:57.823 [2024-12-09 05:31:11.572393] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:57.823 [2024-12-09 05:31:11.575916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.823 05:31:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1823084 00:37:57.823 [2024-12-09 05:31:11.601552] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
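With the reconnect loop still spinning, the script finally brings up the target side, which is why the last record above flips to "Resetting controller successful." Collapsed out of the interleaved trace, the bring-up is five RPCs; a sketch of the same sequence through SPDK's scripts/rpc.py (the RPC variable and comments are ours, the method names and arguments are exactly as traced):

# Target bring-up as traced: transport, backing bdev, subsystem, namespace, listener.
RPC=./scripts/rpc.py   # path inside an SPDK checkout; adjust for your tree
$RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as traced
$RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As soon as the listener on 10.0.0.2:4420 exists, the reconnect attempt at 05:31:11.601 succeeds and bdevperf can start pushing I/O.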
00:37:59.759 4487.57 IOPS, 17.53 MiB/s [2024-12-09T04:31:14.776Z] 5408.12 IOPS, 21.13 MiB/s [2024-12-09T04:31:15.714Z] 6117.11 IOPS, 23.89 MiB/s [2024-12-09T04:31:16.672Z] 6678.60 IOPS, 26.09 MiB/s [2024-12-09T04:31:17.610Z] 7150.91 IOPS, 27.93 MiB/s [2024-12-09T04:31:18.550Z] 7530.25 IOPS, 29.42 MiB/s [2024-12-09T04:31:19.492Z] 7856.15 IOPS, 30.69 MiB/s [2024-12-09T04:31:20.875Z] 8140.00 IOPS, 31.80 MiB/s 00:38:06.878 Latency(us) 00:38:06.878 [2024-12-09T04:31:20.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.878 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:06.878 Verification LBA range: start 0x0 length 0x4000 00:38:06.878 Nvme1n1 : 15.01 8376.01 32.72 13123.21 0.00 5934.59 624.64 28398.93 00:38:06.878 [2024-12-09T04:31:20.875Z] =================================================================================================================== 00:38:06.878 [2024-12-09T04:31:20.875Z] Total : 8376.01 32.72 13123.21 0.00 5934.59 624.64 28398.93 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:07.139 rmmod nvme_tcp 00:38:07.139 rmmod nvme_fabrics 00:38:07.139 rmmod nvme_keyring 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1824310 ']' 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1824310 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1824310 ']' 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1824310 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:07.139 05:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1824310 00:38:07.139 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:07.139 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:07.139 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1824310' 00:38:07.139 killing process with pid 1824310 00:38:07.139 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1824310 00:38:07.139 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1824310 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:08.079 05:31:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.988 05:31:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:09.989 00:38:09.989 real 0m30.336s 00:38:09.989 user 1m9.962s 00:38:09.989 sys 0m7.995s 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.989 ************************************ 00:38:09.989 END TEST nvmf_bdevperf 00:38:09.989 ************************************ 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.989 ************************************ 00:38:09.989 START TEST nvmf_target_disconnect 00:38:09.989 ************************************ 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:09.989 * Looking for test storage... 
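One cross-check on the bdevperf summary above: with 4096-byte I/Os the IOPS and MiB/s columns must agree, and they do; the large Fail/s figure is consistent with I/O failing fast during the stretch when every reset was refused.

# Sanity arithmetic for the summary row (IO size 4096 bytes, runtime 15.01 s):
echo '8376.01 * 4096 / 1048576' | bc -l   # 32.7188, matching the 32.72 MiB/s column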
00:38:09.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:38:09.989 05:31:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:10.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.250 --rc genhtml_branch_coverage=1 00:38:10.250 --rc genhtml_function_coverage=1 00:38:10.250 --rc genhtml_legend=1 00:38:10.250 --rc geninfo_all_blocks=1 00:38:10.250 --rc geninfo_unexecuted_blocks=1 00:38:10.250 00:38:10.250 ' 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:10.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.250 --rc genhtml_branch_coverage=1 00:38:10.250 --rc genhtml_function_coverage=1 00:38:10.250 --rc genhtml_legend=1 00:38:10.250 --rc geninfo_all_blocks=1 00:38:10.250 --rc geninfo_unexecuted_blocks=1 00:38:10.250 00:38:10.250 ' 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:10.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.250 --rc genhtml_branch_coverage=1 00:38:10.250 --rc genhtml_function_coverage=1 00:38:10.250 --rc genhtml_legend=1 00:38:10.250 --rc geninfo_all_blocks=1 00:38:10.250 --rc geninfo_unexecuted_blocks=1 00:38:10.250 00:38:10.250 ' 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:10.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.250 --rc genhtml_branch_coverage=1 00:38:10.250 --rc genhtml_function_coverage=1 00:38:10.250 --rc genhtml_legend=1 00:38:10.250 --rc geninfo_all_blocks=1 00:38:10.250 --rc geninfo_unexecuted_blocks=1 00:38:10.250 00:38:10.250 ' 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:10.250 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:10.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
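The PATH assembled above carries the same go, golangci and protoc directories many times over: paths/export.sh prepends its toolchain paths each time it is sourced, and it has been sourced repeatedly in this run. A hedged sketch of an idempotent prepend that would keep PATH flat; the helper name path_prepend is hypothetical and not part of paths/export.sh:

  # Hypothetical helper: prepend a directory to PATH only when it is not already there.
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;               # already present: leave PATH untouched
          *) PATH="$1:$PATH" ;;
      esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/go/1.21.1/bin    # second call is a no-op
  export PATH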
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:10.251 05:31:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:18.394 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:18.394 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:18.394 Found net devices under 0000:31:00.0: cvl_0_0 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:18.394 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:18.395 Found net devices under 0000:31:00.1: cvl_0_1 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
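gather_supported_nvmf_pci_devs above matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b), confirmed their driver is ice, and resolved the renamed net devices cvl_0_0 and cvl_0_1 through sysfs. A minimal sketch of the same discovery using only the standard Linux sysfs layout, with no SPDK helpers involved:

  #!/usr/bin/env bash
  # Enumerate PCI functions and print the network interfaces behind each E810 port,
  # mirroring the 'Found 0000:31:00.x' / 'Found net devices under ...' lines above.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "  net device: ${net##*/}"
      done
  done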
00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:18.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:18.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:38:18.395 00:38:18.395 --- 10.0.0.2 ping statistics --- 00:38:18.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:18.395 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:18.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
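nvmf_tcp_init above splits the two ports into roles: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and becomes the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened with iptables, and both directions are ping-checked (the replies for the second ping continue just below). Condensed into plain iproute2 commands, with the interface and namespace names taken from the trace:

  # Target interface lives in its own network namespace; initiator stays in the root one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic in on the initiator-facing side, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace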
00:38:18.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:38:18.395 00:38:18.395 --- 10.0.0.1 ping statistics --- 00:38:18.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:18.395 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:18.395 ************************************ 00:38:18.395 START TEST nvmf_target_disconnect_tc1 00:38:18.395 ************************************ 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:18.395 05:31:31 
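With connectivity proven, the harness fixes NVMF_TRANSPORT_OPTS to '-t tcp -o' and loads the kernel initiator module with modprobe nvme-tcp. The tests below drive I/O with SPDK's userspace reconnect example rather than the kernel initiator, but for reference, a kernel-side attach with nvme-cli against the same endpoint would look roughly like this (hedged example, not a command this run executes; the NQN values echo the ones defined earlier in this log):

  modprobe nvme-tcp
  # Hedged nvme-cli example: connect to the subsystem the target will export at 10.0.0.2:4420.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  nvme list                                        # the Malloc-backed namespace should appear
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1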
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:18.395 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:18.395 [2024-12-09 05:31:31.916572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.395 [2024-12-09 05:31:31.916678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000393580 with addr=10.0.0.2, port=4420 00:38:18.395 [2024-12-09 05:31:31.916748] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:18.395 [2024-12-09 05:31:31.916765] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:18.396 [2024-12-09 05:31:31.916781] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:18.396 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:18.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:18.396 Initializing NVMe Controllers 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:18.396 00:38:18.396 real 0m0.262s 00:38:18.396 user 0m0.106s 00:38:18.396 sys 0m0.150s 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:18.396 ************************************ 00:38:18.396 END TEST nvmf_target_disconnect_tc1 00:38:18.396 ************************************ 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
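tc1 above is a negative test: nothing is listening on 10.0.0.2:4420 yet, so the reconnect example must fail (connect() returns errno 111, ECONNREFUSED, and spdk_nvme_probe() gives up), and the NOT wrapper inverts that expected failure into a pass, recording es=1 as the exit status. A stripped-down sketch of the inversion; the real NOT in autotest_common.sh also screens out signal-style exit codes above 128, which is elided here:

  # Simplified NOT: succeed only when the wrapped command fails, so an
  # expected connection failure counts as a passing test.
  NOT() {
      if "$@"; then
          return 1      # command unexpectedly succeeded -> test failure
      fi
      return 0          # command failed as expected (here: ECONNREFUSED)
  }
  NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'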
xtrace_disable 00:38:18.396 05:31:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:18.396 ************************************ 00:38:18.396 START TEST nvmf_target_disconnect_tc2 00:38:18.396 ************************************ 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1830546 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1830546 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1830546 ']' 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:18.396 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:18.396 [2024-12-09 05:31:32.151178] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:38:18.396 [2024-12-09 05:31:32.151305] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:18.396 [2024-12-09 05:31:32.316605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:18.658 [2024-12-09 05:31:32.447703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:18.658 [2024-12-09 05:31:32.447774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
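disconnect_init launches nvmf_tgt inside the target namespace on core mask 0xF0 (cores 4-7, matching the reactor lines below) and then parks in waitforlisten until the app's JSON-RPC socket answers. A hedged sketch of that readiness poll; the real helper lives in autotest_common.sh and this body is only an approximation of it:

  # Approximate waitforlisten: spin until the SPDK RPC server on the given
  # UNIX socket responds, bailing out if the app dies first.
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
          if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                               # RPC is up; target is listening
          fi
          sleep 0.1
      done
      return 1
  }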
00:38:18.658 [2024-12-09 05:31:32.447787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:18.658 [2024-12-09 05:31:32.447800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:18.658 [2024-12-09 05:31:32.447810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:18.658 [2024-12-09 05:31:32.450952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:18.658 [2024-12-09 05:31:32.451295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:18.658 [2024-12-09 05:31:32.451407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:18.658 [2024-12-09 05:31:32.451425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:19.231 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:19.232 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:19.232 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:19.232 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:19.232 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.232 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:19.232 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:19.232 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.232 05:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.232 Malloc0 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.232 [2024-12-09 05:31:33.059153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.232 05:31:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.232 [2024-12-09 05:31:33.101337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1830777 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:19.232 05:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:21.152 05:31:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1830546 00:38:21.152 05:31:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error 
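That completes tc2's setup: a 64 MiB, 512-byte-block Malloc0 bdev is exposed through subsystem nqn.2016-06.io.spdk:cnode1 on a TCP listener at 10.0.0.2:4420, the reconnect example starts hammering it with 32-deep random I/O, and two seconds in the target is killed with SIGKILL so every in-flight command fails, which is the storm of completions that follows. Condensed, with scripts/rpc.py standing in for the rpc_cmd wrapper and all arguments taken from the trace:

  # Target side (inside the cvl_0_0_ns_spdk namespace): build the subsystem.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Host side: start I/O, then yank the target away mid-run.
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"    # nvmfpid was captured at startup (1830546 in this run)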
(sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 [2024-12-09 05:31:35.143451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed 
with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Write completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 [2024-12-09 05:31:35.143964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.152 starting I/O failed 00:38:21.152 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 
00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Write completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 Read completed with error (sct=0, sc=8) 00:38:21.153 starting I/O failed 00:38:21.153 [2024-12-09 05:31:35.144379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.153 [2024-12-09 05:31:35.144795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.153 [2024-12-09 05:31:35.144834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.153 qpair failed and we were unable to recover it. 00:38:21.427 [2024-12-09 05:31:35.145365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.427 [2024-12-09 05:31:35.145414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.427 qpair failed and we were unable to recover it. 00:38:21.427 [2024-12-09 05:31:35.145806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.427 [2024-12-09 05:31:35.145835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.427 qpair failed and we were unable to recover it. 00:38:21.427 [2024-12-09 05:31:35.146295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.427 [2024-12-09 05:31:35.146342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.427 qpair failed and we were unable to recover it. 00:38:21.427 [2024-12-09 05:31:35.146728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.427 [2024-12-09 05:31:35.146747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.427 qpair failed and we were unable to recover it. 00:38:21.427 [2024-12-09 05:31:35.147180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.427 [2024-12-09 05:31:35.147227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.427 qpair failed and we were unable to recover it. 
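Each queued command comes back 'completed with error (sct=0, sc=8)': status code type 0 is the generic command set, and generic status 0x08 is 'command aborted due to SQ deletion', which is exactly what aborting a queue pair after the controller vanishes produces. Once each qpair reports the CQ transport error -6, the example drops into its reconnect path. An illustrative decode of those two fields (values per the NVMe base specification; the helper itself is not SPDK code):

  # Map the (sct, sc) pair printed above onto NVMe status meanings.
  decode_nvme_status() {
      local sct=$1 sc=$2
      case "$sct/$sc" in
          0/0) echo "generic: successful completion" ;;
          0/4) echo "generic: data transfer error" ;;
          0/8) echo "generic: command aborted due to SQ deletion" ;;   # the storm above
          *)   echo "sct=$sct sc=$sc: see the NVMe base spec status tables" ;;
      esac
  }
  decode_nvme_status 0 8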
00:38:21.427 [2024-12-09 05:31:35.147552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.427 [2024-12-09 05:31:35.147571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.427 qpair failed and we were unable to recover it. 00:38:21.427 [2024-12-09 05:31:35.148030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.427 [2024-12-09 05:31:35.148077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.427 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.148464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.148483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.148811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.148834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.149092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.149108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.149300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.149317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.149548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.149562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.149884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.149898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.150325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.150340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.150695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.150713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 
00:38:21.428 [2024-12-09 05:31:35.151096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.151111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.151296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.151311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.151497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.151513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.151755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.151769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.152107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.152122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.152447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.152461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.152798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.152813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.153027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.153041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.153418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.153432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 00:38:21.428 [2024-12-09 05:31:35.153746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.428 [2024-12-09 05:31:35.153761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.428 qpair failed and we were unable to recover it. 
00:38:21.434 [2024-12-09 05:31:35.224462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.224504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.224762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.224806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.225209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.225250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.225612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.225653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.225932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.225978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.226359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.226400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.226706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.226746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.227128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.227171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.227541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.227582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.227944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.227987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 
00:38:21.434 [2024-12-09 05:31:35.228349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.228389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.228768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.228807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.229190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.229232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.229597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.229635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.229974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.230016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.230278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.230320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.230705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.230758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.231122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.231164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.231524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.231564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.231874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.231914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 
00:38:21.434 [2024-12-09 05:31:35.232274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.232315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.232679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.232719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.233113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.233154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.233515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.233554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.233923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.233966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.234308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.234348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.234715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.434 [2024-12-09 05:31:35.234755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.434 qpair failed and we were unable to recover it. 00:38:21.434 [2024-12-09 05:31:35.235124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.235165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.235531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.235574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.235946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.236006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 
00:38:21.435 [2024-12-09 05:31:35.236364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.236404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.236797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.236857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.237226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.237268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.237673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.237713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.238101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.238143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.238513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.238554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.238830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.238875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.239215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.239256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.239615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.239655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.239991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.240032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 
00:38:21.435 [2024-12-09 05:31:35.240304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.240344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.240728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.240769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.241199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.241241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.241601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.241643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.241892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.241934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.242282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.242322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.242747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.242787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.243147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.243187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.243561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.243602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.243783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.243841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 
00:38:21.435 [2024-12-09 05:31:35.244222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.244262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.244614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.244654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.245031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.245076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.245518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.245558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.245917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.245959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.246258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.246297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.246605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.246652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.246997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.247039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.247410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.247450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.247826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.247868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 
00:38:21.435 [2024-12-09 05:31:35.248257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.248299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.248661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.248701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.435 qpair failed and we were unable to recover it. 00:38:21.435 [2024-12-09 05:31:35.249081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.435 [2024-12-09 05:31:35.249122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.249500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.249540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.249908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.249950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.250305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.250345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.250721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.250760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.250993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.251036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.251427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.251469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.251826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.251868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 
00:38:21.436 [2024-12-09 05:31:35.252247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.252287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.252667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.252709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.252996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.253039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.253403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.253443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.253799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.253852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.254216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.254256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.254539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.254583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.254851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.254895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.255160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.255199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.255584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.255624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 
00:38:21.436 [2024-12-09 05:31:35.255966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.256009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.256459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.256498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.256759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.256797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.257184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.257226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.257591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.257632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.257972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.258014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.258436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.258477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.258840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.258881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.259256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.259298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.259668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.259709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 
00:38:21.436 [2024-12-09 05:31:35.260082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.260123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.260494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.260534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.260954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.261001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.261272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.261326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.261689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.261729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.262016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.262057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.436 [2024-12-09 05:31:35.262430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.436 [2024-12-09 05:31:35.262476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.436 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.262841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.262884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.263223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.263263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.263622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.263661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 
00:38:21.437 [2024-12-09 05:31:35.264024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.264066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.264408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.264448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.264847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.264889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.265257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.265297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.265674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.265715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.266078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.266120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.266445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.266485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.266854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.266894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.267266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.267307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.267672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.267712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 
00:38:21.437 [2024-12-09 05:31:35.268100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.268141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.268498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.268538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.268891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.268934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.269297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.269339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.269696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.269736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.270008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.270049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.270426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.270466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.270832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.270873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.271145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.271189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.271549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.271589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 
00:38:21.437 [2024-12-09 05:31:35.271965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.272008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.272352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.272392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.272759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.272799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.273171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.273212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.273579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.273619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.273988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.274030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.274410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.274450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.437 qpair failed and we were unable to recover it. 00:38:21.437 [2024-12-09 05:31:35.274866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.437 [2024-12-09 05:31:35.274907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.275273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.275314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.275677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.275716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 
00:38:21.438 [2024-12-09 05:31:35.275948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.275991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.276337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.276377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.276737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.276777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.277138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.277180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.277539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.277581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.277723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.277766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.278144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.278193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.278456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.278495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.278915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.278957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 00:38:21.438 [2024-12-09 05:31:35.279316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.438 [2024-12-09 05:31:35.279356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.438 qpair failed and we were unable to recover it. 
[... the same three-line failure triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim with timestamps advancing from 2024-12-09 05:31:35.279717 through 05:31:35.357311; the intervening duplicate entries are elided ...]
00:38:21.444 [2024-12-09 05:31:35.357680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:21.444 [2024-12-09 05:31:35.357720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:21.444 qpair failed and we were unable to recover it.
00:38:21.444 [2024-12-09 05:31:35.358094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.358136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.358444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.358485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.358842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.358884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.359230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.359270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.359647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.359689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.360093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.360135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.360410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.360448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.360809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.360860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.361134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.361175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.361553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.361605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 
00:38:21.444 [2024-12-09 05:31:35.361968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.362010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.362381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.362421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.362789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.362854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.363219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.363259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.363510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.363553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.363931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.363973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.364336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.364377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.364745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.364785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.365164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.365205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.365562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.365601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 
00:38:21.444 [2024-12-09 05:31:35.365968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.366011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.366352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.366392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.366749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.366788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.367166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.367208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.367583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.367624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.367993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.444 [2024-12-09 05:31:35.368035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.444 qpair failed and we were unable to recover it. 00:38:21.444 [2024-12-09 05:31:35.368399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.368439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.368739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.368779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.369156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.369199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.369566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.369608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 
00:38:21.445 [2024-12-09 05:31:35.369970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.370011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.370377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.370416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.370687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.370728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.371172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.371213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.371570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.371610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.371956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.371998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.372339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.372385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.372735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.372776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.373142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.373184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.373556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.373595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 
00:38:21.445 [2024-12-09 05:31:35.373968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.374010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.374356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.374396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.374752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.374791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.375160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.375201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.375583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.375623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.375967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.376009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.376384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.376424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.376789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.376837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.377206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.377247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.377618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.377657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 
00:38:21.445 [2024-12-09 05:31:35.377972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.378014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.378425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.378465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.378833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.378876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.379239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.379279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.379653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.379692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.379939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.379983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.380369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.380410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.380754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.380794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.381187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.381228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.381593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.381634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 
00:38:21.445 [2024-12-09 05:31:35.382013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.382056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.382424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.382464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.382775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.382825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.445 qpair failed and we were unable to recover it. 00:38:21.445 [2024-12-09 05:31:35.383196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.445 [2024-12-09 05:31:35.383237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.383603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.383644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.384010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.384051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.384468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.384508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.384886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.384927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.385297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.385338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.385694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.385734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 
00:38:21.446 [2024-12-09 05:31:35.386111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.386152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.386528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.386568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.386947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.386990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.387349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.387401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.387747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.387788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.388060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.388100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.388457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.388503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.388850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.388891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.389269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.389309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.389672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.389711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 
00:38:21.446 [2024-12-09 05:31:35.390078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.390122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.390482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.390523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.390895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.390936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.391359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.391400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.391774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.391827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.392193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.392234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.392595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.392635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.392985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.393027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.393380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.393420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.393778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.393827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 
00:38:21.446 [2024-12-09 05:31:35.394207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.394248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.394592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.394632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.394841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.394884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.395248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.395288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.395661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.395700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.395984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.396026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.396426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.396467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.396836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.396877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.397242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.397282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.397683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.397723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 
00:38:21.446 [2024-12-09 05:31:35.397984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.446 [2024-12-09 05:31:35.398027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.446 qpair failed and we were unable to recover it. 00:38:21.446 [2024-12-09 05:31:35.398297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.398342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.398672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.398713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.398957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.399002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.399360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.399402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.399772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.399812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.400200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.400240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.400612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.400652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.401055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.401098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.401356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.401396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 
00:38:21.447 [2024-12-09 05:31:35.401679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.401719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.402109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.402150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.402549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.402591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.402959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.403001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.403364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.403405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.403839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.403883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.404245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.404288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.404654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.404695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.405064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.405105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.405497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.405538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 
00:38:21.447 [2024-12-09 05:31:35.405961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.406004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.406349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.406390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.406652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.406693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.447 [2024-12-09 05:31:35.407060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.447 [2024-12-09 05:31:35.407102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.447 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.407479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.407523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.407896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.407939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.408305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.408345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.408615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.408656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.409104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.409148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.409517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.409558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 
00:38:21.722 [2024-12-09 05:31:35.409803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.409858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.410252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.410294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.410576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.410621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.411026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.411069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.411441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.411483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.411845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.411886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.412300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.412346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.412724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.412778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.413158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.413201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 00:38:21.722 [2024-12-09 05:31:35.413454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.722 [2024-12-09 05:31:35.413495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.722 qpair failed and we were unable to recover it. 
00:38:21.728 (the same three-line connect()/qpair failure repeats roughly 200 more times, identical except for timestamps advancing from 05:31:35.413854 onward; final occurrence below)
00:38:21.728 [2024-12-09 05:31:35.492497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:21.728 [2024-12-09 05:31:35.492537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:21.728 qpair failed and we were unable to recover it.
00:38:21.728 [2024-12-09 05:31:35.492883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.492925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.493272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.493311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.493552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.493592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.493953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.494003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.494337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.494379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.494729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.494770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.495118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.495160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.495545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.495584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.495948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.495991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.496364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.496406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 
00:38:21.728 [2024-12-09 05:31:35.496785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.496837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.497205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.497248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.497617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.497659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.498024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.728 [2024-12-09 05:31:35.498064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.728 qpair failed and we were unable to recover it. 00:38:21.728 [2024-12-09 05:31:35.498425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.498465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.498838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.498880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.499257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.499299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.499561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.499601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.499841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.499887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.500300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.500340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 
00:38:21.729 [2024-12-09 05:31:35.500703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.500744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.501123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.501166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.501472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.501512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.501870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.501911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.502283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.502325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.502636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.502676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.503045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.503086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.503431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.503470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.503838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.503881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.504240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.504280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 
00:38:21.729 [2024-12-09 05:31:35.504554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.504595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.504967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.505009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.505373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.505415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.505779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.505866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.506236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.506277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.506661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.506701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.507074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.507117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.507438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.507479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.507836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.507878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.508240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.508281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 
00:38:21.729 [2024-12-09 05:31:35.508652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.508694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.509061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.509103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.509375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.509415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.509796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.509852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.510136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.510177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.510524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.510564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.510926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.510967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.511327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.511368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.511741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.511783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 00:38:21.729 [2024-12-09 05:31:35.512162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.729 [2024-12-09 05:31:35.512202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.729 qpair failed and we were unable to recover it. 
00:38:21.730 [2024-12-09 05:31:35.512564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.512604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.512876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.512917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.513313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.513355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.513668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.513721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.514100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.514142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.514495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.514535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.514913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.514956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.515202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.515247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.515625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.515666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.515919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.515960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 
00:38:21.730 [2024-12-09 05:31:35.516341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.516383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.516750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.516791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.517156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.517197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.517571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.517611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.517995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.518038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.518453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.518494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.518843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.518885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.519248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.519289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.519578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.519619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.519994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.520035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 
00:38:21.730 [2024-12-09 05:31:35.520407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.520448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.520873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.520915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.521170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.521211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.521562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.521603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.521984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.522025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.522366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.522406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.522791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.522843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.523212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.523252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.523608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.523649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.523992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.524033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 
00:38:21.730 [2024-12-09 05:31:35.524390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.524432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.524850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.524893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.525186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.525226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.525524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.525570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.525843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.525884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.526286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.730 [2024-12-09 05:31:35.526327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.730 qpair failed and we were unable to recover it. 00:38:21.730 [2024-12-09 05:31:35.526685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.526726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.527116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.527156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.527436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.527476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.527837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.527879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 
00:38:21.731 [2024-12-09 05:31:35.528230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.528270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.528686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.528726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.529121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.529164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.529541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.529581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.529960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.530003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.530360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.530400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.530839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.530882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.531227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.531267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.531625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.531665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.532095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.532139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 
00:38:21.731 [2024-12-09 05:31:35.532486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.532527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.532863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.532905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.533184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.533237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.533607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.533648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.534042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.534085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.534348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.534388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.534766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.534806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.535168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.535209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.535604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.535645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.536026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.536068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 
00:38:21.731 [2024-12-09 05:31:35.536440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.536482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.536846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.536887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.537166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.537207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.537585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.537625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.537989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.538029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.538358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.538399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.538840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.538886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.539264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.539318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.539687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.539728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.540062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.540105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 
00:38:21.731 [2024-12-09 05:31:35.540468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.540508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.731 qpair failed and we were unable to recover it. 00:38:21.731 [2024-12-09 05:31:35.540921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.731 [2024-12-09 05:31:35.540964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.732 qpair failed and we were unable to recover it. 00:38:21.732 [2024-12-09 05:31:35.541361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.732 [2024-12-09 05:31:35.541402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.732 qpair failed and we were unable to recover it. 00:38:21.732 [2024-12-09 05:31:35.541687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.732 [2024-12-09 05:31:35.541733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.732 qpair failed and we were unable to recover it. 00:38:21.732 [2024-12-09 05:31:35.542127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.732 [2024-12-09 05:31:35.542169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.732 qpair failed and we were unable to recover it. 00:38:21.732 [2024-12-09 05:31:35.542504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.732 [2024-12-09 05:31:35.542544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.732 qpair failed and we were unable to recover it. 00:38:21.732 [2024-12-09 05:31:35.542929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.732 [2024-12-09 05:31:35.542970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.732 qpair failed and we were unable to recover it. 00:38:21.732 [2024-12-09 05:31:35.543340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.732 [2024-12-09 05:31:35.543380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.732 qpair failed and we were unable to recover it. 00:38:21.732 [2024-12-09 05:31:35.543723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.732 [2024-12-09 05:31:35.543764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.732 qpair failed and we were unable to recover it. 00:38:21.732 [2024-12-09 05:31:35.544143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.732 [2024-12-09 05:31:35.544186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.732 qpair failed and we were unable to recover it. 
00:38:21.732 [2024-12-09 05:31:35.544548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:21.732 [2024-12-09 05:31:35.544588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:21.732 qpair failed and we were unable to recover it.
00:38:21.732-00:38:21.738 [2024-12-09 05:31:35.544988 .. 05:31:35.626507] -- same three-line error repeated roughly 200 more times (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it); only the timestamps differ.
00:38:21.738 [2024-12-09 05:31:35.626908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.626949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.627289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.627329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.627702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.627741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.628101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.628142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.628497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.628537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.628906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.628947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.629301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.629341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.629711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.629751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.630144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.630187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.630558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.630598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 
00:38:21.738 [2024-12-09 05:31:35.630955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.631036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.631273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.631316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.631686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.631726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.632056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.632097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.632458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.632498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.632833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.632872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.633239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.633279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.633645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.633692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.634048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.634090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.634513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.634553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 
00:38:21.738 [2024-12-09 05:31:35.634828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.634872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.635286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.635327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.635682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.635722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.636089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.636130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.636509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.636550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.636849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.636891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.637232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.637272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.637642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.637682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.638047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.638090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.638453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.638493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 
00:38:21.738 [2024-12-09 05:31:35.638799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.638858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.639108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.639149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.639499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.639540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.639915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.639977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.640347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.640388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.640808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.738 [2024-12-09 05:31:35.640861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.738 qpair failed and we were unable to recover it. 00:38:21.738 [2024-12-09 05:31:35.641224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.641265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.641638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.641679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.642060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.642102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.642435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.642475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 
00:38:21.739 [2024-12-09 05:31:35.642814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.642864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.643129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.643169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.643506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.643546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.643781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.643834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.644245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.644286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.644637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.644676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.645017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.645059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.645399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.645439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.645813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.645863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.646223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.646263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 
00:38:21.739 [2024-12-09 05:31:35.646641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.646695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.647061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.647103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.647453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.647494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.647838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.647881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.648154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.648197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.648468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.648509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.648777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.648830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.649209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.649250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.649626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.649666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.650029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.650070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 
00:38:21.739 [2024-12-09 05:31:35.650397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.650438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.650798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.650849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.651213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.651253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.651547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.651586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.651986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.652029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.652397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.652438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.652833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.652875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.653221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.653261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.653651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.653691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.653868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.653910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 
00:38:21.739 [2024-12-09 05:31:35.654272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.654312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.654663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.654704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.655059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.655102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.739 [2024-12-09 05:31:35.655471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.739 [2024-12-09 05:31:35.655511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.739 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.655856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.655898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.656157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.656199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.656592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.656633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.657009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.657051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.657418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.657459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.657655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.657695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 
00:38:21.740 [2024-12-09 05:31:35.658055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.658097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.658437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.658477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.658861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.658903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.659247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.659288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.659684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.659725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.660132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.660174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.660560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.660600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.660934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.660976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.661376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.661417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.661686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.661725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 
00:38:21.740 [2024-12-09 05:31:35.662109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.662158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.662538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.662579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.662975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.663018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.663389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.663429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.663837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.663879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.664278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.664319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.664723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.664768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.665104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.665158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.665526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.665566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.665940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.665982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 
00:38:21.740 [2024-12-09 05:31:35.666220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.666265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.666627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.666668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.667028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.667071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.667416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.667456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.667827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.667870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.668134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.668175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.668570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.668610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.668977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.669018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.669378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.669419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.740 [2024-12-09 05:31:35.669653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.669693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 
00:38:21.740 [2024-12-09 05:31:35.669951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.740 [2024-12-09 05:31:35.669993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.740 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.670357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.670397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.670741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.670781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.671168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.671210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.671601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.671642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.671967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.672009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.672375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.672416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.672793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.672846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.673221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.673262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.673630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.673671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 
00:38:21.741 [2024-12-09 05:31:35.674031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.674073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.674459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.674500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.674841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.674883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.675231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.675273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.675546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.675587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.675806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.675857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.676263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.676303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.676553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.676593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.676990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.677032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 00:38:21.741 [2024-12-09 05:31:35.677385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.741 [2024-12-09 05:31:35.677425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:21.741 qpair failed and we were unable to recover it. 
00:38:21.741 [2024-12-09 05:31:35.677788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:21.741 [2024-12-09 05:31:35.677845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:21.741 qpair failed and we were unable to recover it.
[... the same three-line error triplet repeats back-to-back for the remainder of this span: every attempt fails connect() with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the sock connection error for the same tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." Source timestamps advance from 05:31:35.677 to 05:31:35.759; console timestamps from 00:38:21.741 to 00:38:22.013. ...]
00:38:22.013 [2024-12-09 05:31:35.759442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.759482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.759750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.759789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.760159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.760201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.760559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.760599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.760867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.760910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.761278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.761318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.761663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.761702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.762139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.762181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.762550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.762591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.762929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.762970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 
00:38:22.013 [2024-12-09 05:31:35.763318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.763358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.763683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.763723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.764084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.764126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.764492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.764546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.764920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.764962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.765324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.765363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.765685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.765724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.766153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.766195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.766565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.766604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 00:38:22.013 [2024-12-09 05:31:35.766973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.013 [2024-12-09 05:31:35.767014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.013 qpair failed and we were unable to recover it. 
00:38:22.013 [2024-12-09 05:31:35.767387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.767427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.767792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.767851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.768245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.768285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.768631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.768670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.768943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.768985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.769418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.769458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.769832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.769873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.770134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.770176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.770503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.770544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.770901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.770941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 
00:38:22.014 [2024-12-09 05:31:35.771299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.771346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.771667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.771707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.772041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.772082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.772456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.772498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.772875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.772917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.773264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.773303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.773670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.773710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.773929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.773972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.774331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.774370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.774728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.774767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 
00:38:22.014 [2024-12-09 05:31:35.775148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.775190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.775555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.775595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.775989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.776031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.776375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.776415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.776790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.776840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.777210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.777250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.777591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.777630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.778009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.778050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.778396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.778435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.778813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.778862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 
00:38:22.014 [2024-12-09 05:31:35.779220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.779261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.779525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.014 [2024-12-09 05:31:35.779565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.014 qpair failed and we were unable to recover it. 00:38:22.014 [2024-12-09 05:31:35.779952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.779994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.780360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.780402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.780674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.780716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.781066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.781108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.781480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.781520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.781893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.781937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.782309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.782349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.782611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.782654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 
00:38:22.015 [2024-12-09 05:31:35.783021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.783064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.783390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.783431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.783792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.783842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.784216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.784256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.784625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.784665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.785023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.785066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.785411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.785450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.785789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.785840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.786204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.786244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.786608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.786647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 
00:38:22.015 [2024-12-09 05:31:35.787012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.787060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.787475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.787515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.787890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.787932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.788279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.788319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.788661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.788701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.789064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.789105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.789573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.789614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.789974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.790029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.790394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.790435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.790798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.790848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 
00:38:22.015 [2024-12-09 05:31:35.791211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.791252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.791617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.791657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.792050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.792091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.792460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.792499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.792880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.792922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.793286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.793326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.793696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.793736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.015 [2024-12-09 05:31:35.794112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.015 [2024-12-09 05:31:35.794153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.015 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.794527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.794568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.794843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.794883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 
00:38:22.016 [2024-12-09 05:31:35.795170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.795210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.795581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.795621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.795983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.796025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.796452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.796492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.796866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.796908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.797284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.797324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.797629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.797669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.798022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.798064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.798425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.798465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.798867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.798907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 
00:38:22.016 [2024-12-09 05:31:35.799298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.799340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.799772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.799813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.800189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.800230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.800598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.800638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.800997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.801039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.801382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.801422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.801664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.801704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.802053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.802093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.802465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.802505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.802842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.802883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 
00:38:22.016 [2024-12-09 05:31:35.803263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.803302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.803670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.803710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.804086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.804127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.804391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.804429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.804825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.804866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.805231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.805270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.805628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.805667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.805936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.805976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.806310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.806350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.806721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.806759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 
00:38:22.016 [2024-12-09 05:31:35.807028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.807072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.807429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.807471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.807852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.807895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.808180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.808224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.808591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.808633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.808991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.809033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.809419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.809459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.809797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.809861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.810199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.810240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 00:38:22.016 [2024-12-09 05:31:35.810547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.016 [2024-12-09 05:31:35.810587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.016 qpair failed and we were unable to recover it. 
00:38:22.016 [2024-12-09 05:31:35.812350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.016 [2024-12-09 05:31:35.812420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.016 qpair failed and we were unable to recover it.
00:38:22.016 [... the three-line failure above repeats ~210 times verbatim; only the timestamps advance (05:31:35.812 through 05:31:35.894, roughly 80 ms), with every attempt targeting tqpair=0x6150003a0000 at 10.0.0.2:4420 and ending in errno = 111 ...]
00:38:22.020 [2024-12-09 05:31:35.894166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.020 [2024-12-09 05:31:35.894207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.020 qpair failed and we were unable to recover it.
00:38:22.020 [2024-12-09 05:31:35.894554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.894594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.894953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.894995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.895302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.895344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.895705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.895745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.895997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.896041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.896402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.896442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.896803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.896856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.897199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.897240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.897581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.897622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.897974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.898016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 
00:38:22.020 [2024-12-09 05:31:35.898438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.898478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.898865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.898907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.899262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.899302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.899669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.899710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.900077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.900118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.900476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.900516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.900859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.900901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.901209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.901249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.901607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.901647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.901905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.901945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 
00:38:22.020 [2024-12-09 05:31:35.902182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.902223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.902596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.902636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.902906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.902946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.903329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.903370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.903736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.903776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.904137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.904178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.904531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.904580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.904832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.904872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.905241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.905281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.905659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.905699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 
00:38:22.020 [2024-12-09 05:31:35.906107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.906148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.906509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.906549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.906913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.906954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.907308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.907354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.020 [2024-12-09 05:31:35.907723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.020 [2024-12-09 05:31:35.907765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.020 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.908129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.908171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.908517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.908556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.908932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.908979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.909344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.909384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.909643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.909682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 
00:38:22.021 [2024-12-09 05:31:35.910054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.910096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.910452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.910492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.910853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.910895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.911137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.911177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.911608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.911649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.912027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.912068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.912445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.912485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.912852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.912894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.913229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.913269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.913659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.913699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 
00:38:22.021 [2024-12-09 05:31:35.914040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.914081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.914447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.914487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.914848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.914892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.915141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.915185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.915443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.915483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.915848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.915890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.916245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.916286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.916656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.916709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.916955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.916996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.917346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.917385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 
00:38:22.021 [2024-12-09 05:31:35.917746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.917786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.918159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.918200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.918570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.918609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.918967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.919008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.919388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.919429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.919848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.919889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.920233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.920274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.920631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.920670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.921015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.921057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.921295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.921337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 
00:38:22.021 [2024-12-09 05:31:35.921720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.921760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.922019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.922060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.922431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.922472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.922875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.922916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.923358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.923399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.923759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.923799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.923992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.924032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.924384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.924431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.924813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.924867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.925140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.925180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 
00:38:22.021 [2024-12-09 05:31:35.925558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.925598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.925930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.925971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.926332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.926372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.926749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.926789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.927070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.927110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.927433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.927473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.927839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.927881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.928263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.928303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.928663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.928703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.929152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.929193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 
00:38:22.021 [2024-12-09 05:31:35.929416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.929455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.929839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.929880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.930241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.930282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.930516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.930558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.021 [2024-12-09 05:31:35.930926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.021 [2024-12-09 05:31:35.930967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.021 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.931321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.931361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.931704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.931744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.931981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.932022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.932381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.932422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.932802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.932852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 
00:38:22.022 [2024-12-09 05:31:35.933128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.933169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.933453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.933493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.933851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.933892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.934256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.934340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.934636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.934679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.934990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.935032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.935386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.935427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.935851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.935893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.936257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.936298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.936682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.936721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 
00:38:22.022 [2024-12-09 05:31:35.937063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.937106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.937453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.937494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.937865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.937906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.938162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.938201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.938556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.938596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.938829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.938869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.939243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.939284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.939722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.939768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.940138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.940179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.940544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.940583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 
00:38:22.022 [2024-12-09 05:31:35.940922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.940968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.941323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.941376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.941653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.941692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.942105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.942148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.942410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.942449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.942825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.942867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.943231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.943272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.943647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.943687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.943979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.944022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 00:38:22.022 [2024-12-09 05:31:35.944264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.022 [2024-12-09 05:31:35.944304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.022 qpair failed and we were unable to recover it. 
00:38:22.022 [2024-12-09 05:31:35.944655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.022 [2024-12-09 05:31:35.944694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.022 qpair failed and we were unable to recover it.
00:38:22.022 [... the same three-line failure (posix_sock_create connect() errno = 111 -> nvme_tcp_qpair_connect_sock error on tqpair=0x6150003a0000, addr=10.0.0.2, port=4420 -> qpair unrecoverable) repeats for every reconnect attempt from 05:31:35.944 through 05:31:36.033 ...]
00:38:22.297 [2024-12-09 05:31:36.033586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.297 [2024-12-09 05:31:36.033626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.297 qpair failed and we were unable to recover it.
00:38:22.297 [2024-12-09 05:31:36.033887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.033932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.034308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.034348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.034709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.034749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.035124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.035166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.035496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.035543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.035988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.036030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.036372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.036412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.036811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.036861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.037109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.037149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.037514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.037553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 
00:38:22.297 [2024-12-09 05:31:36.037918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.037959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.038378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.038417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.038782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.038831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.039197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.039237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.039613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.039652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.039908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.039953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.040320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.040361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.040728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.040767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.041152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.041201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.041475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.041515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 
00:38:22.297 [2024-12-09 05:31:36.041977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.042018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.042393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.042433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.042807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.042874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.043222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.043262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.043615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.043655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.044022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.044063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.044442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.044482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.044854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.044895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.045251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.045291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.045650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.045689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 
00:38:22.297 [2024-12-09 05:31:36.046041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.046083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.046464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.046504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.046836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.046878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.047253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.047292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.047638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.047678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.047928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.047968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.048327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.048368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.048641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.048693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.049080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.049121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.049358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.049400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 
00:38:22.297 [2024-12-09 05:31:36.049732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.049773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.050134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.050177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.050555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.050595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.051018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.051061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.297 [2024-12-09 05:31:36.051415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.297 [2024-12-09 05:31:36.051461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.297 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.051838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.051880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.052237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.052276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.052696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.052735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.053181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.053222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.053591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.053631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 
00:38:22.298 [2024-12-09 05:31:36.054001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.054042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.054430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.054470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.054800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.054850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.055210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.055250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.055480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.055523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.055887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.055946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.056298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.056339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.056609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.056651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.057030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.057072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.057441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.057481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 
00:38:22.298 [2024-12-09 05:31:36.057847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.057889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.058242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.058283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.058644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.058684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.059045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.059087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.059357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.059397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.059783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.059836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.060187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.060227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.060563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.060603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.060959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.061000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.061369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.061408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 
00:38:22.298 [2024-12-09 05:31:36.061778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.061831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.062178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.062219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.062565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.062605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.062977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.063019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.063328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.063368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.063679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.063718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.064002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.064043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.064408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.064449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.064705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.064748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.065139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.065183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 
00:38:22.298 [2024-12-09 05:31:36.065541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.065581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.065944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.065986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.066351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.066391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.066763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.066802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.067164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.067210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.067553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.067593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.067970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.068011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.068347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.068387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.298 [2024-12-09 05:31:36.068675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.298 [2024-12-09 05:31:36.068714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.298 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.069084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.069126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 
00:38:22.299 [2024-12-09 05:31:36.069486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.069526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.069878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.069919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.070273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.070312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.070656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.070696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.071110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.071152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.071515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.071555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.071789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.071841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.072165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.072205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.072486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.072531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.072904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.072947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 
00:38:22.299 [2024-12-09 05:31:36.073358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.073405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.073748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.073801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.074150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.074192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.074549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.074590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.074930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.074972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.075318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.075360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.075717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.075757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.076133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.076174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.076533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.076573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.076994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.077036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 
00:38:22.299 [2024-12-09 05:31:36.077399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.077439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.077773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.077812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.078203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.078243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.078597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.078638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.079058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.079100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.079458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.079497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.079873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.079915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.080310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.080350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.080645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.080691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.081061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.081103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 
00:38:22.299 [2024-12-09 05:31:36.081508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.081548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.081904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.081946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.082310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.082350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.082677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.082716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.083065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.083113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.083487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.083528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.083907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.083949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.084306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.084346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.084714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.084754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 00:38:22.299 [2024-12-09 05:31:36.085109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.299 [2024-12-09 05:31:36.085151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.299 qpair failed and we were unable to recover it. 
00:38:22.299 [2024-12-09 05:31:36.085532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.299 [2024-12-09 05:31:36.085572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.299 qpair failed and we were unable to recover it.
00:38:22.304 (the same connect() failed / sock connection error / qpair failed triplet repeats verbatim for tqpair=0x6150003a0000, addr=10.0.0.2, port=4420, timestamps 05:31:36.085 through 05:31:36.165)
00:38:22.304 [2024-12-09 05:31:36.165958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.166000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.166365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.166407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.166611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.166649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.167110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.167151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.167410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.167449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.167814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.167878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.168239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.168279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.168518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.168558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.168934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.168975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.169248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.169288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 
00:38:22.304 [2024-12-09 05:31:36.169522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.169561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.169836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.169880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.170134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.170176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.170394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.170436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.170807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.170857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.171015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.171056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.171412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.171454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.171838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.171892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.172127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.172166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.172543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.172582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 
00:38:22.304 [2024-12-09 05:31:36.172837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.172877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.173255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.173295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.173572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.173610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.173864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.173905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.174287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.174333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.174705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.174744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.175105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.175147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.175492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.175532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.175790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.175853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.176155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.176195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 
00:38:22.304 [2024-12-09 05:31:36.176563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.176604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.176871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.176913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.177285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.177326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.177695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.177734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.178023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.178064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.178430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.178470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.178703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.178742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.179099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.179142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.179522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.179563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.179948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.179990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 
00:38:22.304 [2024-12-09 05:31:36.180246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.180285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.180547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.180587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.181046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.181089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.181447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.181488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.181852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.181894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.182136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.182175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.182536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.182575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.304 qpair failed and we were unable to recover it. 00:38:22.304 [2024-12-09 05:31:36.182936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.304 [2024-12-09 05:31:36.182978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.183333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.183373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.183592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.183631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 
00:38:22.305 [2024-12-09 05:31:36.183857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.183911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.184302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.184343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.184687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.184727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.185109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.185150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.185516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.185556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.185888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.185930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.186306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.186346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.186717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.186757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.187125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.187168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.187441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.187481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 
00:38:22.305 [2024-12-09 05:31:36.187832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.187875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.188256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.188298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.188674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.188714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.189130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.189171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.189540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.189586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.189923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.189964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.190310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.190350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.190716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.190757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.191127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.191169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.191548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.191588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 
00:38:22.305 [2024-12-09 05:31:36.191970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.192011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.192390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.192430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.192842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.192885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.193247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.193287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.193536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.193575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.193713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.193756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.194160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.194201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.194559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.194599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.194890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.194933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.195288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.195327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 
00:38:22.305 [2024-12-09 05:31:36.195683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.195724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.196142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.196198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.196532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.196572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.196928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.196971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.197340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.197381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.197625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.197668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.198037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.198078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.198282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.198323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.198695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.198735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.198958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.199001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 
00:38:22.305 [2024-12-09 05:31:36.199380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.199421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.199787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.199838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.305 [2024-12-09 05:31:36.200196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.305 [2024-12-09 05:31:36.200237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.305 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.200580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.200621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.200978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.201018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.201396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.201437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.201796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.201860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.203546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.203613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.203897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.203942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.205508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.205571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 
00:38:22.306 [2024-12-09 05:31:36.205809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.205881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.208105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.208174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.208563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.208606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.208896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.208940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.209266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.209313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.209678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.209718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.210115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.210157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.210508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.210548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.210827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.210867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.211234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.211274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 
00:38:22.306 [2024-12-09 05:31:36.211632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.211673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.212038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.212079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.212316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.212356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.212729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.212769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.213127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.213169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.213446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.213490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.213858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.213901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.214153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.214195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.214537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.214580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.214941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.214982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 
00:38:22.306 [2024-12-09 05:31:36.215345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.215384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.215748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.215788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.216167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.216209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.216562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.216602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.216966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.217007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.217380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.217428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.217958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.218006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.218427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.218470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.218750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.218790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 00:38:22.306 [2024-12-09 05:31:36.219182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.306 [2024-12-09 05:31:36.219224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.306 qpair failed and we were unable to recover it. 
00:38:22.306 [2024-12-09 05:31:36.219591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.306 [2024-12-09 05:31:36.219631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.306 qpair failed and we were unable to recover it.
00:38:22.581 [... the same three-line record repeats for every subsequent connect attempt from 05:31:36.219591 through 05:31:36.301185, with only the timestamps advancing; every attempt fails identically with errno = 111 against tqpair=0x6150003a0000 at addr=10.0.0.2, port=4420, and each ends with "qpair failed and we were unable to recover it." ...]
00:38:22.581 [2024-12-09 05:31:36.301434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.301477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.301860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.301901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.302270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.302310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.302673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.302713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.303069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.303113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.303481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.303522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.303895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.303938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.304233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.304273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.304626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.304666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.305087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.305128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 
00:38:22.581 [2024-12-09 05:31:36.305496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.305536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.305769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.581 [2024-12-09 05:31:36.305809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.581 qpair failed and we were unable to recover it. 00:38:22.581 [2024-12-09 05:31:36.306214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.306255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.306583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.306624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.306952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.306994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.307358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.307398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.307780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.307828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.308199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.308241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.308613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.308654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.309013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.309055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 
00:38:22.582 [2024-12-09 05:31:36.309393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.309432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.309801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.309850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.310221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.310260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.310598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.310638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.311010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.311052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.311430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.311470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.311845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.311887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.312138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.312182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.312427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.312465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.312855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.312896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 
00:38:22.582 [2024-12-09 05:31:36.313263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.313303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.313541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.313589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.313870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.313912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.314261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.314301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.314645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.314685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.315042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.315084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.315432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.315472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.315742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.315781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.316064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.316107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.316468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.316508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 
00:38:22.582 [2024-12-09 05:31:36.316853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.316894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.317228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.317268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.317513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.317551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.317904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.317946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.318306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.318347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.318726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.318766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.319159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.319203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.319546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.319586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.319988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.320030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 00:38:22.582 [2024-12-09 05:31:36.320386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.582 [2024-12-09 05:31:36.320427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.582 qpair failed and we were unable to recover it. 
00:38:22.582 [2024-12-09 05:31:36.320785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.320833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.321190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.321230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.321599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.321640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.322016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.322057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.322334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.322373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.322753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.322794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.323159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.323200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.323512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.323552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.323922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.323965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.324333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.324374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 
00:38:22.583 [2024-12-09 05:31:36.324753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.324795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.325179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.325232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.325596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.325636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.325918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.325958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.326362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.326401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.326776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.326835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.327243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.327284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.327629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.327669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.328030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.328071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.328429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.328469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 
00:38:22.583 [2024-12-09 05:31:36.328697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.328736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.329113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.329161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.329523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.329563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.329973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.330015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.330377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.330416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.330682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.330721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.330996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.331038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.331290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.331329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.331690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.331730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.332092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.332134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 
00:38:22.583 [2024-12-09 05:31:36.332463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.332502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.332873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.332915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.333258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.333298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.333640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.333679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.334031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.334072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.334437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.334480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.334836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.334878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.583 qpair failed and we were unable to recover it. 00:38:22.583 [2024-12-09 05:31:36.335239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.583 [2024-12-09 05:31:36.335279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.335618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.335658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.336010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.336052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 
00:38:22.584 [2024-12-09 05:31:36.336409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.336448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.336826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.336867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.337232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.337271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.337639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.337679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.338065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.338106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.338463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.338502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.338865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.338907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.339250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.339291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.339671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.339712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.340062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.340105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 
00:38:22.584 [2024-12-09 05:31:36.340483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.340523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.340871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.340911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.341281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.341323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.341675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.341715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.342082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.342132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.342500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.342541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.342907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.342949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.343298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.343338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.343713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.343752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.344113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.344154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 
00:38:22.584 [2024-12-09 05:31:36.344496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.344536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.344907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.344954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.345324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.345364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.345720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.345760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.346046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.346087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.346438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.346479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.346849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.346890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.347266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.347306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.347539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.347581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 00:38:22.584 [2024-12-09 05:31:36.347958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.584 [2024-12-09 05:31:36.348000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.584 qpair failed and we were unable to recover it. 
00:38:22.585 [2024-12-09 05:31:36.348342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.348382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 00:38:22.585 [2024-12-09 05:31:36.348602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.348644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 00:38:22.585 [2024-12-09 05:31:36.349023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.349065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 00:38:22.585 [2024-12-09 05:31:36.349298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.349338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 00:38:22.585 [2024-12-09 05:31:36.349758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.349802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 00:38:22.585 [2024-12-09 05:31:36.350159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.350213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 00:38:22.585 [2024-12-09 05:31:36.350577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.350617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 00:38:22.585 [2024-12-09 05:31:36.350984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.351027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 00:38:22.585 [2024-12-09 05:31:36.351402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.351442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 00:38:22.585 [2024-12-09 05:31:36.351828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.585 [2024-12-09 05:31:36.351870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.585 qpair failed and we were unable to recover it. 
00:38:22.585 [2024-12-09 05:31:36.352235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.585 [2024-12-09 05:31:36.352275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.585 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats ≈208 more times between 05:31:36.352550 and 05:31:36.432456: every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED, i.e. nothing is listening at the target address), and each qpair fails without recovery ...]
00:38:22.590 [2024-12-09 05:31:36.432677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.590 [2024-12-09 05:31:36.432720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.590 qpair failed and we were unable to recover it.
00:38:22.590 [2024-12-09 05:31:36.432966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.590 [2024-12-09 05:31:36.433011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.590 qpair failed and we were unable to recover it. 00:38:22.590 [2024-12-09 05:31:36.433402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.590 [2024-12-09 05:31:36.433442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.590 qpair failed and we were unable to recover it. 00:38:22.590 [2024-12-09 05:31:36.433711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.590 [2024-12-09 05:31:36.433752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.590 qpair failed and we were unable to recover it. 00:38:22.590 [2024-12-09 05:31:36.434122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.434164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.434530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.434570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.434937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.434979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.435335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.435379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.435592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.435632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.435987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.436029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.436391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.436431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 
00:38:22.591 [2024-12-09 05:31:36.436730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.436770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.437150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.437194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.437551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.437591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.437959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.438001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.438342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.438382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.438750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.438789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.439167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.439209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.439354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.439393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.439752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.439793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.440011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.440060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 
00:38:22.591 [2024-12-09 05:31:36.440420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.440460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.440720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.440764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.441146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.441187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.441538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.441578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.441946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.441987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.442355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.442395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.442762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.442804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.443054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.443096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.443331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.443374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.443611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.443652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 
00:38:22.591 [2024-12-09 05:31:36.443894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.443939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.444296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.444336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.444719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.444758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.445039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.445081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.445427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.445467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.445838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.445880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.446232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.446272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.446665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.446705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.447056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.447104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.447451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.447491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 
00:38:22.591 [2024-12-09 05:31:36.447872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.591 [2024-12-09 05:31:36.447915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-12-09 05:31:36.448204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.448257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.448631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.448672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.449022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.449065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.449436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.449475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.449830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.449872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.450260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.450300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.450575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.450614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.451014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.451055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.451441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.451482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 
00:38:22.592 [2024-12-09 05:31:36.451854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.451896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.452259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.452299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.452552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.452592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.452964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.453006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.453157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.453197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.453552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.453592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.453833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.453875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.454239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.454280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.454538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.454577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.454927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.454970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 
00:38:22.592 [2024-12-09 05:31:36.455335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.455375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.455750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.455792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.456157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.456198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.456651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.456691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.456932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.456974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.457347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.457388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.457793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.457842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.458226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.458266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.458627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.458667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.459059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.459101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 
00:38:22.592 [2024-12-09 05:31:36.459352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.459396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.459774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.459814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.460107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.460148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.460499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.460539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.460884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.460926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.461266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.461306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.461533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.461572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.461809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.461860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-12-09 05:31:36.462252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.592 [2024-12-09 05:31:36.462299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.462663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.462703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 
00:38:22.593 [2024-12-09 05:31:36.463096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.463138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.463492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.463533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.463907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.463949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.464332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.464374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.464750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.464790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.465179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.465220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.465565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.465605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.465967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.466009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.466384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.466423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.466781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.466830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 
00:38:22.593 [2024-12-09 05:31:36.467202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.467242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.467618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.467657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.468032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.468075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.468340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.468380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.468737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.468777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.469140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.469182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.469546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.469586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.469939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.469981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.470338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.470377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.470725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.470765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 
00:38:22.593 [2024-12-09 05:31:36.471220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.471262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.471622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.471661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.472027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.472076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.472437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.472477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.472866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.472912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.473279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.473332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.473693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.473734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.474109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.474151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.474419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.474462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.474842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.474884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 
00:38:22.593 [2024-12-09 05:31:36.475266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.475307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.475668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.475708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-12-09 05:31:36.476146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.593 [2024-12-09 05:31:36.476189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.476553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.476593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.476948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.476989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.477357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.477398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.477825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.477867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.478223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.478263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.478597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.478644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.479005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.479047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 
00:38:22.594 [2024-12-09 05:31:36.479401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.479441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.479700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.479740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.480004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.480045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.480394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.480434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.480807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.480857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.481222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.481264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.481627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.481667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.482030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.482071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.482288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.482331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 00:38:22.594 [2024-12-09 05:31:36.482696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.594 [2024-12-09 05:31:36.482735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.594 qpair failed and we were unable to recover it. 
00:38:22.594 [2024-12-09 05:31:36.483066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.594 [2024-12-09 05:31:36.483106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.594 qpair failed and we were unable to recover it.
00:38:22.594 [... the same three-line failure repeats for every subsequent reconnect attempt between 05:31:36.483 and 05:31:36.564: connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x6150003a0000, and each qpair fails without recovery ...]
00:38:22.866 [2024-12-09 05:31:36.564517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.866 [2024-12-09 05:31:36.564557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.866 qpair failed and we were unable to recover it.
00:38:22.866 [2024-12-09 05:31:36.564902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.564943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.565279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.565319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.565677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.565716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.565994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.566035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.566423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.566462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.566840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.566882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.567140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.567180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.567557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.567596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.568014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.568054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.568395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.568435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 
00:38:22.866 [2024-12-09 05:31:36.568799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.568848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.569191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.569231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.569610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.569650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.569937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.569978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.570355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.570394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.570748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.570788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.571222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.571264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.571636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.571675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.572029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.572076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.572450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.572491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 
00:38:22.866 [2024-12-09 05:31:36.572863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.572906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.573280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.573333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.573709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.573748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.574116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.574157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.574527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.574567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.574928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.574969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.575310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.575349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.575681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.575721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.575956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.575999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.576325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.576365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 
00:38:22.866 [2024-12-09 05:31:36.576707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.576747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.577121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.577162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.577544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.577584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.577951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.866 [2024-12-09 05:31:36.577992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.866 qpair failed and we were unable to recover it. 00:38:22.866 [2024-12-09 05:31:36.578330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.578369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.578722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.578763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.579113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.579155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.579540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.579579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.579914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.579956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.580308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.580348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 
00:38:22.867 [2024-12-09 05:31:36.580712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.580751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.581096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.581139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.581372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.581412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.581768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.581808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.582181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.582221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.582572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.582613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.582862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.582904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.583292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.583332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.583701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.583740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.584110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.584151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 
00:38:22.867 [2024-12-09 05:31:36.584552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.584592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.584951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.584992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.585323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.585363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.585728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.585768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.586140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.586181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.586545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.586585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.586949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.586990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.587353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.587394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.587768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.587814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.588211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.588251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 
00:38:22.867 [2024-12-09 05:31:36.588616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.588656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.589019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.589061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.589435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.589476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.589849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.589890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.590273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.590313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.590567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.590610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.590989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.591030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.591373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.591413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.591848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.591890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.592254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.592295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 
00:38:22.867 [2024-12-09 05:31:36.592650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.592690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.867 [2024-12-09 05:31:36.592947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.867 [2024-12-09 05:31:36.592987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.867 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.593265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.593305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.593665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.593705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.594062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.594103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.594493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.594533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.594904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.594946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.595323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.595363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.595738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.595778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.596142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.596184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 
00:38:22.868 [2024-12-09 05:31:36.596554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.596594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.596854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.596895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.597324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.597364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.597609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.597648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.598020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.598063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.598341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.598394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.598754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.598793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.599148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.599189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.599524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.599565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.599883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.599940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 
00:38:22.868 [2024-12-09 05:31:36.600299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.600339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.600713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.600753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.601121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.601162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.601539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.601578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.601937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.601978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.602240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.602279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.602668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.602708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.603065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.603106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.603361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.603407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.603592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.603634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 
00:38:22.868 [2024-12-09 05:31:36.603889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.603930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.604295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.604335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.604694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.604734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.605086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.605128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.605486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.605525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.868 [2024-12-09 05:31:36.605899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.868 [2024-12-09 05:31:36.605941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.868 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.606354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.606394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.606759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.606799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.607156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.607196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.607571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.607611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 
00:38:22.869 [2024-12-09 05:31:36.607990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.608030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.608342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.608382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.608740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.608779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.609143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.609184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.609560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.609600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.609976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.610017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.610265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.610306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.610742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.610781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.611145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.611185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.611467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.611507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 
00:38:22.869 [2024-12-09 05:31:36.611887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.611929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.612310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.612349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.612763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.612802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.613129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.613170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.613476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.613515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.613881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.613929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.614300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.614340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.614758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.614797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.615136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.615176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 00:38:22.869 [2024-12-09 05:31:36.615532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.869 [2024-12-09 05:31:36.615573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.869 qpair failed and we were unable to recover it. 
00:38:22.869 [2024-12-09 05:31:36.615939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.869 [2024-12-09 05:31:36.615980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.869 qpair failed and we were unable to recover it.
[... the identical three-line record (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-12-09 05:31:36.616317 through 2024-12-09 05:31:36.698238, differing only in timestamps ...]
00:38:22.876 [2024-12-09 05:31:36.698581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.876 [2024-12-09 05:31:36.698622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.876 qpair failed and we were unable to recover it.
00:38:22.876 [2024-12-09 05:31:36.699004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.699046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.699413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.699466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.699837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.699878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.700187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.700227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.700617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.700656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.701016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.701057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.701410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.701450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.701802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.701851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.702112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.702156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.702510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.702551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 
00:38:22.876 [2024-12-09 05:31:36.702919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.702961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.703314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.703355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.703623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.703667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.704083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.704125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.704465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.704505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.704742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.704785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.705175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.876 [2024-12-09 05:31:36.705216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.876 qpair failed and we were unable to recover it. 00:38:22.876 [2024-12-09 05:31:36.705584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.705624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.705993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.706035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.706398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.706439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 
00:38:22.877 [2024-12-09 05:31:36.706831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.706872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.707270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.707311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.707690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.707730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.708109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.708150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.708510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.708550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.708754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.708801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.709140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.709180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.709556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.709595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.709965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.710006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.710376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.710416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 
00:38:22.877 [2024-12-09 05:31:36.710661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.710700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.711075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.711116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.711468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.711508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.711923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.711965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.712244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.712282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.712650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.712689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.713055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.713097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.713441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.713480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.713758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.713797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.714227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.714268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 
00:38:22.877 [2024-12-09 05:31:36.714638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.714676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.715035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.715076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.715321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.877 [2024-12-09 05:31:36.715366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.877 qpair failed and we were unable to recover it. 00:38:22.877 [2024-12-09 05:31:36.715713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.715753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.716132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.716174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.716544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.716584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.716831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.716875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.717231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.717271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.717504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.717547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.717922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.717963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 
00:38:22.878 [2024-12-09 05:31:36.718350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.718390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.718759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.718799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.719176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.719218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.719558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.719599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.719973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.720015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.720420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.720460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.720839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.720881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.721283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.721323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.721692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.721731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.721998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.722043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 
00:38:22.878 [2024-12-09 05:31:36.722441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.722481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.722847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.722889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.723256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.723296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.723574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.723613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.723871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.723917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.724292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.724351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.724529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.724569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.724804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.724854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.725228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.725269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.878 qpair failed and we were unable to recover it. 00:38:22.878 [2024-12-09 05:31:36.725638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.878 [2024-12-09 05:31:36.725677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 
00:38:22.879 [2024-12-09 05:31:36.726042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.726083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.726421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.726462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.726719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.726758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.727147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.727188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.727550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.727590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.727963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.728006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.728268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.728308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.728696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.728735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.729112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.729155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.729530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.729571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 
00:38:22.879 [2024-12-09 05:31:36.729940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.729981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.730349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.730389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.730756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.730796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.731171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.731211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.731432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.731476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.731704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.731745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.732119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.732161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.732512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.732551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.732791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.732846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.733100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.733140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 
00:38:22.879 [2024-12-09 05:31:36.733512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.733553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.733892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.733934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.734181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.734222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.734585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.734624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.734997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.735038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.879 [2024-12-09 05:31:36.735307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.879 [2024-12-09 05:31:36.735351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.879 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.735773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.735812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.736181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.736221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.736582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.736622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.736990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.737031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 
00:38:22.880 [2024-12-09 05:31:36.737427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.737467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.737835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.737877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.738255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.738295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.738655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.738695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.739058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.739099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.739508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.739593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.739923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.739965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.740339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.740379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.740718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.740757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.741138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.741179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 
00:38:22.880 [2024-12-09 05:31:36.741554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.741594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.741932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.741974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.742245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.742285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.742674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.742713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.743063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.743105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.743475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.743515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.743890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.743932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.744292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.744331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.744681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.744721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.745075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.745117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 
00:38:22.880 [2024-12-09 05:31:36.745489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.745529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.745900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.880 [2024-12-09 05:31:36.745942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.880 qpair failed and we were unable to recover it. 00:38:22.880 [2024-12-09 05:31:36.746304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.881 [2024-12-09 05:31:36.746343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.881 qpair failed and we were unable to recover it. 00:38:22.881 [2024-12-09 05:31:36.746700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.881 [2024-12-09 05:31:36.746740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.881 qpair failed and we were unable to recover it. 00:38:22.881 [2024-12-09 05:31:36.747107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.881 [2024-12-09 05:31:36.747148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.881 qpair failed and we were unable to recover it. 00:38:22.881 [2024-12-09 05:31:36.747492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.881 [2024-12-09 05:31:36.747531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.881 qpair failed and we were unable to recover it. 00:38:22.881 [2024-12-09 05:31:36.747796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.881 [2024-12-09 05:31:36.747857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.881 qpair failed and we were unable to recover it. 00:38:22.881 [2024-12-09 05:31:36.748220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.881 [2024-12-09 05:31:36.748260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.881 qpair failed and we were unable to recover it. 00:38:22.881 [2024-12-09 05:31:36.748689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.881 [2024-12-09 05:31:36.748732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.881 qpair failed and we were unable to recover it. 00:38:22.881 [2024-12-09 05:31:36.749108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.881 [2024-12-09 05:31:36.749164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.881 qpair failed and we were unable to recover it. 
00:38:22.881 [2024-12-09 05:31:36.749511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.881 [2024-12-09 05:31:36.749551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.881 qpair failed and we were unable to recover it.
[... the identical posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it" sequence repeats for every reconnect attempt on tqpair=0x6150003a0000 (addr=10.0.0.2, port=4420) from 05:31:36.749886 through 05:31:36.831147 ...]
00:38:22.888 [2024-12-09 05:31:36.831505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:22.888 [2024-12-09 05:31:36.831546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:22.888 qpair failed and we were unable to recover it.
00:38:22.888 [2024-12-09 05:31:36.831912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.831955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.832210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.832252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.832638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.832678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.833032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.833074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.833440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.833481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.833847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.833889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.834246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.834287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.834656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.834697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.835053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.835096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.835437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.835478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 
00:38:22.888 [2024-12-09 05:31:36.835855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.835898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.836263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.836305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.836673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.836713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.837095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.837136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.837501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.837542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.837927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.837968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.888 qpair failed and we were unable to recover it. 00:38:22.888 [2024-12-09 05:31:36.838335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.888 [2024-12-09 05:31:36.838375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.838741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.838782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.839161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.839203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.839596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.839637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 
00:38:22.889 [2024-12-09 05:31:36.840007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.840057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.840432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.840473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.840814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.840866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.841267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.841308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.841689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.841730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.842119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.842161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.842527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.842568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.842940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.842983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.843357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.843397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.843763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.843803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 
00:38:22.889 [2024-12-09 05:31:36.844177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.844218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.844561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.844601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.844974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.845022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.845393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.845433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.845803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.845853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.846227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.846269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.846636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.846676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.847064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.847106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.847489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.847529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.847854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.847896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 
00:38:22.889 [2024-12-09 05:31:36.848250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.848289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.848659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.848699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.849062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.849105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.849477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.849517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.849890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.849934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.850289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.850343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.850717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.850760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.851136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.851178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.851569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.851610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.851937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.851979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 
00:38:22.889 [2024-12-09 05:31:36.852342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.852383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.852712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.852752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:22.889 [2024-12-09 05:31:36.853123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.889 [2024-12-09 05:31:36.853166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:22.889 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.853514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.853556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.853883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.853924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.854283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.854322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.854687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.854726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.855104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.855147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.855379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.855422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.855814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.855868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 
00:38:23.160 [2024-12-09 05:31:36.856206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.856247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.856613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.856654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.857068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.857111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.857475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.857516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.857883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.857925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.858217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.858263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.858620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.858661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.858992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.859035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.859388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.859429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.859790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.859839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 
00:38:23.160 [2024-12-09 05:31:36.860083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.860127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.860470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.860512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.860877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.860925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.861287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.861328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.861693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.861733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.862109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.862152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.160 [2024-12-09 05:31:36.862498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.160 [2024-12-09 05:31:36.862539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.160 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.862908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.862951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.863315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.863355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.863724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.863765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 
00:38:23.161 [2024-12-09 05:31:36.864141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.864184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.864548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.864589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.864956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.864997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.865264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.865308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.865663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.865703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.866104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.866146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.866518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.866558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.866930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.866974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.867338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.867380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.867746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.867786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 
00:38:23.161 [2024-12-09 05:31:36.868164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.868205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.868576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.868617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.868989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.869029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.869375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.869414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.869761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.869802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.870183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.870224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.870604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.870645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.871013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.871057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.871424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.871464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.871834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.871877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 
00:38:23.161 [2024-12-09 05:31:36.872212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.872253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.872620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.872660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.873022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.873064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.873263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.873302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.873653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.873693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.874060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.874101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.874469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.874508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.874871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.874914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.875324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.875369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.875761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.875814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 
00:38:23.161 [2024-12-09 05:31:36.876201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.876242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.876612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.876653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.877026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.877074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.877441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.877482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.161 [2024-12-09 05:31:36.877851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.161 [2024-12-09 05:31:36.877892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.161 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.878267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.878308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.878690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.878730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.879184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.879226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.879567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.879608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.879974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.880017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 
00:38:23.162 [2024-12-09 05:31:36.880391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.880431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.880767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.880807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.881036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.881081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.881450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.881490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.881809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.881860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.882144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.882184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.882542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.882583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.882962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.883005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.883346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.883387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 00:38:23.162 [2024-12-09 05:31:36.883758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.162 [2024-12-09 05:31:36.883800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.162 qpair failed and we were unable to recover it. 
00:38:23.162 [2024-12-09 05:31:36.884172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.162 [2024-12-09 05:31:36.884214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:23.162 qpair failed and we were unable to recover it.
[... the identical record repeats continuously from 05:31:36.884 through 05:31:36.964: connect() failed, errno = 111, followed by the sock connection error for tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it." ...]
00:38:23.168 [2024-12-09 05:31:36.964738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.168 [2024-12-09 05:31:36.964778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:23.168 qpair failed and we were unable to recover it.
00:38:23.168 [2024-12-09 05:31:36.965157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.965199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.965398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.965438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.965691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.965733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.966118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.966161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.966529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.966570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.966919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.966960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.967385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.967425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.967795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.967856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.968230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.968271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.968697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.968737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 
00:38:23.168 [2024-12-09 05:31:36.969074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.969115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.969496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.969537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.969908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.969957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.970309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.970349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.970679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.970720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.971086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.971129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.971557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.971598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.971942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.971984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.972337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.972378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.972717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.972758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 
00:38:23.168 [2024-12-09 05:31:36.973113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.973156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.973499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.973539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.973906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.973949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.974315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.974357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.974674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.974716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.975070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.975127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.975461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.975503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.975836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.975878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.168 [2024-12-09 05:31:36.976232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.168 [2024-12-09 05:31:36.976272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.168 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.976508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.976553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 
00:38:23.169 [2024-12-09 05:31:36.976929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.976971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.977337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.977378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.977762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.977803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.978154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.978195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.978563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.978603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.978971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.979014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.979400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.979440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.979808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.979860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.980233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.980274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.980651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.980692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 
00:38:23.169 [2024-12-09 05:31:36.981052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.981094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.981337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.981380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.981744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.981786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.982162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.982206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.982585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.982627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.982967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.983011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.983379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.983420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.983683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.983722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.984076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.984118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.984508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.984548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 
00:38:23.169 [2024-12-09 05:31:36.984908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.984951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.985285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.985325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.985682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.985728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.986084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.986126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.986493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.986533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.986791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.986845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.987235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.987277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.987631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.987673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.988053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.988096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.988483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.988524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 
00:38:23.169 [2024-12-09 05:31:36.988786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.988836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.169 qpair failed and we were unable to recover it. 00:38:23.169 [2024-12-09 05:31:36.989196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.169 [2024-12-09 05:31:36.989237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.989606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.989647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.990025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.990068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.990436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.990477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.990845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.990887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.991250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.991292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.991664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.991705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.992057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.992099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.992424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.992466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 
00:38:23.170 [2024-12-09 05:31:36.992707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.992752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.993123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.993167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.993534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.993575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.993929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.993972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.994199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.994242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.994626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.994666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.994907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.994950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.995320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.995360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.995731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.995771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.996126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.996169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 
00:38:23.170 [2024-12-09 05:31:36.996536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.996577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.996992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.997034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.997406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.997447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.997814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.997866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.998251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.998292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.998608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.998648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.999029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.999071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.999458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.999499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:36.999872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:36.999915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:37.000149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:37.000202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 
00:38:23.170 [2024-12-09 05:31:37.000550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:37.000592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:37.000920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:37.000963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:37.001326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:37.001373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:37.001699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:37.001740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:37.002119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:37.002160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:37.002589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:37.002630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:37.002998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.170 [2024-12-09 05:31:37.003040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.170 qpair failed and we were unable to recover it. 00:38:23.170 [2024-12-09 05:31:37.003378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.003419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.003789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.003839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.004235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.004276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 
00:38:23.171 [2024-12-09 05:31:37.004640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.004681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.004906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.004951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.005294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.005336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.005703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.005745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.006027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.006074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.006447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.006490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.006868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.006912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.007268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.007309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.007684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.007725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.008001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.008044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 
00:38:23.171 [2024-12-09 05:31:37.008410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.008449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.008830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.008873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.009231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.009273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.009638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.009679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.010044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.010087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.010467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.010511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.010880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.010922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.011290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.011333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.011701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.011742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.011986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.012032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 
00:38:23.171 [2024-12-09 05:31:37.012411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.012452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.012835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.012879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.013153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.013195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.013534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.013575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.013802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.013854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.014218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.014259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.014624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.014664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.015027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.015070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.015444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.015485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 00:38:23.171 [2024-12-09 05:31:37.015836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.171 [2024-12-09 05:31:37.015892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.171 qpair failed and we were unable to recover it. 
00:38:23.171 [2024-12-09 05:31:37.016242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.171 [2024-12-09 05:31:37.016283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:23.171 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously between the timestamps shown above and below ...]
00:38:23.177 [2024-12-09 05:31:37.099399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.177 [2024-12-09 05:31:37.099440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:38:23.177 qpair failed and we were unable to recover it.
00:38:23.177 [2024-12-09 05:31:37.099778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.099828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.100191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.100232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.100645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.100686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.100959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.101002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.101399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.101441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.101825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.101881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.102272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.102313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.102688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.102730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.103118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.103160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.103527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.103567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 
00:38:23.177 [2024-12-09 05:31:37.103931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.103975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.177 qpair failed and we were unable to recover it. 00:38:23.177 [2024-12-09 05:31:37.104234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.177 [2024-12-09 05:31:37.104276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.104663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.104703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.105063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.105106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.105476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.105517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.105887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.105929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.106295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.106335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.106709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.106750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.107187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.107229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.107619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.107666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 
00:38:23.178 [2024-12-09 05:31:37.108028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.108071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.108466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.108506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.108873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.108916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.109283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.109324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.109694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.109735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.110065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.110109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.110475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.110516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.110884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.110926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.111174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.111214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.111568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.111609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 
00:38:23.178 [2024-12-09 05:31:37.111976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.112017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.112389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.112429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.112796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.112846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.113232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.113273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.113638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.113679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.113999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.114040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.114390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.114430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.114799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.114850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.115225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.115266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.115634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.115675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 
00:38:23.178 [2024-12-09 05:31:37.115935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.115977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.116362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.116403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.116771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.116811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.117177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.117218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.117592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.117632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.117967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.118010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.118286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.118328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.178 [2024-12-09 05:31:37.118712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.178 [2024-12-09 05:31:37.118754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.178 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.119137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.119180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.119549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.119589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 
00:38:23.179 [2024-12-09 05:31:37.119980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.120023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.120388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.120428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.120761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.120801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.121038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.121083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.121449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.121489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.121833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.121875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.122248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.122289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.122654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.122695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.123079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.123121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.123467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.123514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 
00:38:23.179 [2024-12-09 05:31:37.123886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.123928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.124299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.124339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.124608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.124652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.124994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.125040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.125366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.125407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.125760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.125800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.126164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.126205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.126549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.126589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.126963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.127007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 00:38:23.179 [2024-12-09 05:31:37.127353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.127409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it. 
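errno = 111 on Linux is ECONNREFUSED: nothing is accepting on 10.0.0.2:4420 while the target application is down, so every reconnect attempt fails immediately. As a hedged spot check (a sketch assuming the shell can reach the same namespace and address; this is not part of the test suite), bash's /dev/tcp redirection reproduces the same refusal:

  # hypothetical one-liner: prints "refused/timeout" (bash reports "Connection refused")
  # while no listener is up, and "listening" once nvmf_tgt serves 10.0.0.2:4420 again
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo listening || echo refused/timeout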
00:38:23.179 [2024-12-09 05:31:37.127785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.179 [2024-12-09 05:31:37.127848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.179 qpair failed and we were unable to recover it.
00:38:23.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1830546 Killed "${NVMF_APP[@]}" "$@"
00:38:23.179 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:38:23.179 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:23.179 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1831455
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1831455
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1831455 ']'
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:23.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:23.180 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:23.454 -- the connect() failed, errno = 111 / sock connection error of tqpair=0x6150003a0000 / qpair failed sequence keeps repeating around the trace above, from 05:31:37.128276 through 05:31:37.164248, while the host retries 10.0.0.2:4420 --
00:38:23.455 [2024-12-09 05:31:37.164632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.164673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.164903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.164945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.165313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.165354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.165681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.165722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.165909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.165955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.166353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.166395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.166812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.166877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.167131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.167175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.167431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.167476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.167905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.167950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 
00:38:23.455 [2024-12-09 05:31:37.168353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.168395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.168774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.168838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.169219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.169261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.169631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.169672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.169967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.170013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.170397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.170438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.170844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.170887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.171132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.171174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.171592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.171633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.171995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.172038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 
00:38:23.455 [2024-12-09 05:31:37.172410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.172452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.455 [2024-12-09 05:31:37.172839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.455 [2024-12-09 05:31:37.172881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.455 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.173248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.173289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.173675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.173716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.174064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.174115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.174469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.174510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.174759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.174800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.175070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.175111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.175459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.175500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.175741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.175786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 
00:38:23.456 [2024-12-09 05:31:37.176081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.176123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.176491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.176533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.176778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.176845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.176991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.177031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.177412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.177452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.177838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.177881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.178260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.178314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.178574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.178618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.178763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.178805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.179100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.179145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 
00:38:23.456 [2024-12-09 05:31:37.179542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.179583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.179800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.179856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.180246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.180289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.180672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.180713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.181061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.181104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.181350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.181391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.181743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.181784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.182254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.182297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.182597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.182637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.183035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.183077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 
00:38:23.456 [2024-12-09 05:31:37.183528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.183570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.183842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.183885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.184268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.184311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.184693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.184735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.184982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.185024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.185294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.185336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.456 [2024-12-09 05:31:37.185602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.456 [2024-12-09 05:31:37.185646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.456 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.185995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.186037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.186281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.186322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.186704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.186746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 
00:38:23.457 [2024-12-09 05:31:37.187130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.187173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.187527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.187568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.187923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.187966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.188335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.188375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.188601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.188649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.189039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.189083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.189452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.189493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.189854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.189896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.190316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.190356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.190741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.190782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 
00:38:23.457 [2024-12-09 05:31:37.191217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.191260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.191483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.191525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.191896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.191938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.192300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.192340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.192688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.192730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.192997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.193043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.193442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.193482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.193865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.193907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.194283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.194326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.194582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.194622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 
00:38:23.457 [2024-12-09 05:31:37.194982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.195023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.195395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.195436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.195837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.195880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.196280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.196321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.196705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.196747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.197163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.197205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.197569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.197610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.197875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.197917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.198364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.198404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.198780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.198831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 
00:38:23.457 [2024-12-09 05:31:37.199065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.199106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.199511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.199553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.199968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.457 [2024-12-09 05:31:37.200010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.457 qpair failed and we were unable to recover it. 00:38:23.457 [2024-12-09 05:31:37.200339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.200380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.200788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.200840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.201298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.201338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.201728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.201769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.202183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.202225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.202492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.202535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.202935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.202979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 
00:38:23.458 [2024-12-09 05:31:37.203296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.203351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.203730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.203772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.204059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.204102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.204363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.204405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.204702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.204748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.205010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.205055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.205450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.205492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.205873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.205916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.206302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.206343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.206585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.206630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 
00:38:23.458 [2024-12-09 05:31:37.207009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.207052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.207393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.207434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.207711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.207753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.208225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.208267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.208509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.208555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.208938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.208980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.209361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.209401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.209772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.209813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.210226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.210275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.210650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.210691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 
00:38:23.458 [2024-12-09 05:31:37.210989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.211033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.211404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.211445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.211842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.211884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.212289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.458 [2024-12-09 05:31:37.212330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.458 qpair failed and we were unable to recover it. 00:38:23.458 [2024-12-09 05:31:37.212756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.212796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.213196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.213237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.213617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.213659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.214038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.214081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.214467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.214507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.214782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.214838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 
00:38:23.459 [2024-12-09 05:31:37.215209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.215251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.215608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.215649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.215892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.215938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.216325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.216366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.216507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.216553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.216950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.216992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.217357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.217397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.217853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.217897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.218338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.218382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 00:38:23.459 [2024-12-09 05:31:37.218727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.459 [2024-12-09 05:31:37.218769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:38:23.459 qpair failed and we were unable to recover it. 
00:38:23.459 [2024-12-09 05:31:37.219413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.459 [2024-12-09 05:31:37.219535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:23.459 qpair failed and we were unable to recover it.
00:38:23.459 [... same triplet repeated 19 more times for tqpair=0x615000394700, 05:31:37.220058 through 05:31:37.227730 ...]
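The paired errors above come from SPDK's POSIX socket layer and the NVMe/TCP initiator: on Linux, errno = 111 is ECONNREFUSED, meaning the TCP connection to 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) is actively refused because no target is listening there yet, so each qpair connect attempt fails and is retried. A minimal standalone sketch of the same failure mode, assuming a Linux host where nothing listens on the chosen address and port (the values below only mirror the log):

/* econnrefused_demo.c - reproduce the "connect() failed, errno = 111"
 * seen in the log. Build: cc -o econnrefused_demo econnrefused_demo.c
 * Assumes nothing is listening on ADDR:PORT, as during the failed test.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    const char *addr = "10.0.0.2";   /* mirrors the log; any closed port works */
    int port = 4420;                 /* NVMe/TCP well-known port */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, addr, &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* A reachable peer with no listener answers the SYN with RST,
         * so connect() fails with errno = 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

When the peer host is reachable but the port is closed, this prints errno = 111 (Connection refused); an unreachable host would instead time out (errno 110, ETIMEDOUT) or report errno 113 (EHOSTUNREACH), which is why errno 111 specifically points at a target that has not started listening rather than at broken routing.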
00:38:23.459 [2024-12-09 05:31:37.228132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.459 [2024-12-09 05:31:37.228176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:23.460 qpair failed and we were unable to recover it.
00:38:23.460 [... same triplet repeated 5 more times, 05:31:37.228560 through 05:31:37.230191 ...]
00:38:23.460 [2024-12-09 05:31:37.230567] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:38:23.460 [2024-12-09 05:31:37.230573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.460 [2024-12-09 05:31:37.230617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:23.460 qpair failed and we were unable to recover it.
00:38:23.460 [2024-12-09 05:31:37.230664] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:23.460 [... same triplet repeated 2 more times, 05:31:37.231041 through 05:31:37.231528 ...]
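The bracketed EAL line records how the nvmf application handed its configuration to DPDK at startup: "-c 0xF0" is a hexadecimal core mask with bits 4-7 set, so the target's reactors run on cores 4 through 7. A small sketch of how such a mask decodes into core IDs; decode_coremask is a hypothetical helper for illustration only, not a DPDK or SPDK API:

/* coremask_demo.c - decode a DPDK-style hex core mask such as the
 * "-c 0xF0" in the EAL parameters above (0xF0 -> cores 4 5 6 7).
 * Build: cc -o coremask_demo coremask_demo.c
 */
#include <stdio.h>
#include <stdlib.h>

static void decode_coremask(const char *mask_str)
{
    /* base 16 lets strtoull accept an optional "0x" prefix */
    unsigned long long mask = strtoull(mask_str, NULL, 16);

    printf("%s ->", mask_str);
    for (int core = 0; mask != 0; core++, mask >>= 1) {
        if (mask & 1) {
            printf(" %d", core);   /* bit N set => core N is enabled */
        }
    }
    printf("\n");
}

int main(void)
{
    decode_coremask("0xF0");   /* prints: 0xF0 -> 4 5 6 7 */
    return 0;
}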
00:38:23.460 [2024-12-09 05:31:37.231915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.231962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.232420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.232465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.232846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.232890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.233166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.233210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.233553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.233596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.233982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.234026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.234408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.234450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.234849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.234892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.235145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.235188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.235595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.235639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 
00:38:23.460 [2024-12-09 05:31:37.236026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.236071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.236482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.236525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.236905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.236950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.237350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.237393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.237660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.237709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.238066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.238110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.238398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.238445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.238847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.238893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.239314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.239357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.239749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.239791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 
00:38:23.460 [2024-12-09 05:31:37.240132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.240176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.240568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.240612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.241003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.241047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.241421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.241463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.241796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.241847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.242247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.242291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.242538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.460 [2024-12-09 05:31:37.242579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.460 qpair failed and we were unable to recover it. 00:38:23.460 [2024-12-09 05:31:37.242961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.243004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.243356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.243399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.243733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.243776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 
00:38:23.461 [2024-12-09 05:31:37.244161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.244204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.244595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.244637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.244897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.244940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.245350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.245392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.245617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.245659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.246028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.246071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.246339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.246382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.246784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.246836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.247212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.247254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.247458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.247500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 
00:38:23.461 [2024-12-09 05:31:37.247877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.247920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.248295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.248338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.248719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.248759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.249150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.249194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.249579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.249620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.250010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.250055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.250419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.250462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.250724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.250766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.251116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.251160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.251555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.251598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 
00:38:23.461 [2024-12-09 05:31:37.251841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.251888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.252153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.252194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.252560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.252602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.252855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.252903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.253267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.253317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.253702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.253742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.254131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.254174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.254553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.254597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.254976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.255019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.255392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.255433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 
00:38:23.461 [2024-12-09 05:31:37.255833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.255876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.256112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.256156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.256402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.256445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.256847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.461 [2024-12-09 05:31:37.256889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.461 qpair failed and we were unable to recover it. 00:38:23.461 [2024-12-09 05:31:37.257270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.257311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.257713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.257757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.258158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.258201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.258592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.258635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.259037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.259082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.259460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.259504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 
00:38:23.462 [2024-12-09 05:31:37.259890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.259933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.260185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.260227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.260610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.260651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.261033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.261077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.261457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.261499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.261855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.261897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.262277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.262319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.262704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.262747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.263193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.263236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.263584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.263625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 
00:38:23.462 [2024-12-09 05:31:37.263862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.263907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.264310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.264353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.264595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.264642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.264889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.264936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.265343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.265386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.265764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.265808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.266177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.266219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.266581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.266623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.266998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.267043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.267314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.267357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 
00:38:23.462 [2024-12-09 05:31:37.267732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.267783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.268177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.268222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.268594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.268636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.269018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.269063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.269317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.269371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.269743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.269786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.270151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.270193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.270527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.270569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.271008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.271051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 00:38:23.462 [2024-12-09 05:31:37.271394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.271437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.462 qpair failed and we were unable to recover it. 
00:38:23.462 [2024-12-09 05:31:37.271762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.462 [2024-12-09 05:31:37.271802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.272167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.272210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.272586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.272628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.272978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.273022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.273266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.273314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.273717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.273761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.274145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.274188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.274539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.274581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.274957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.275001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.275165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.275207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 
00:38:23.463 [2024-12-09 05:31:37.275629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.275675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.276075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.276118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.276481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.276523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.276905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.276951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.277327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.277368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.277757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.277799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.278177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.278219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.278630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.278673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.278958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.279001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.279386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.279428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 
00:38:23.463 [2024-12-09 05:31:37.279800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.279851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.280232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.280275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.280645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.280687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.281056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.281100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.281478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.281521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.281854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.281899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.282263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.282304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.282577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.282623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.282998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.283041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.283451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.283494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 
00:38:23.463 [2024-12-09 05:31:37.283868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.283913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.284352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.284395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.463 [2024-12-09 05:31:37.284772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.463 [2024-12-09 05:31:37.284827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.463 qpair failed and we were unable to recover it. 00:38:23.464 [2024-12-09 05:31:37.285085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.464 [2024-12-09 05:31:37.285128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.464 qpair failed and we were unable to recover it. 00:38:23.464 [2024-12-09 05:31:37.285483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.464 [2024-12-09 05:31:37.285535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.464 qpair failed and we were unable to recover it. 00:38:23.464 [2024-12-09 05:31:37.285910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.464 [2024-12-09 05:31:37.285955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.464 qpair failed and we were unable to recover it. 00:38:23.464 [2024-12-09 05:31:37.286333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.464 [2024-12-09 05:31:37.286375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.464 qpair failed and we were unable to recover it. 00:38:23.464 [2024-12-09 05:31:37.286750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.464 [2024-12-09 05:31:37.286791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.464 qpair failed and we were unable to recover it. 00:38:23.464 [2024-12-09 05:31:37.287138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.464 [2024-12-09 05:31:37.287181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.464 qpair failed and we were unable to recover it. 00:38:23.464 [2024-12-09 05:31:37.287562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.464 [2024-12-09 05:31:37.287603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.464 qpair failed and we were unable to recover it. 
00:38:23.469 [2024-12-09 05:31:37.364671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.364711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.365053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.365096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.365370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.365415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.365812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.365866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.366234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.366282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.366641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.366682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.367044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.367086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.367472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.367514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.367872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.367915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.368275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.368316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 
00:38:23.469 [2024-12-09 05:31:37.368682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.368723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.369103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.369146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.369525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.369565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.369947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.369990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.370343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.370385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.370758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.370813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.371183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.371224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.371577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.371617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.371904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.371947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.372320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.372361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 
00:38:23.469 [2024-12-09 05:31:37.372595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.372640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.372948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.372992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.373433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.469 [2024-12-09 05:31:37.373473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.469 qpair failed and we were unable to recover it. 00:38:23.469 [2024-12-09 05:31:37.373851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.373894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.374271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.374311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.374691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.374731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.375014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.375057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.375411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.375452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.375830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.375873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.376269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.376312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 
00:38:23.470 [2024-12-09 05:31:37.376705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.376744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.377039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.377088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.377461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.377503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.377867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.377910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.378282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.378323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.378693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.378735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.379150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.379195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.379561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.379601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.379970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.380013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.380391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.380433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 
00:38:23.470 [2024-12-09 05:31:37.380800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.380849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.381120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.381166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.381340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.381380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.381748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.381789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.382096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.382145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.382483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.382522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.382955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.382998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.383252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.383294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.383649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.383689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.384101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.384142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 
00:38:23.470 [2024-12-09 05:31:37.384484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.384525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.384896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.384939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.385205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.385245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.385647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.385687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.386057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.386099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.386485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.386526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.386895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.386938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.387328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.387371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.387583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.387623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 00:38:23.470 [2024-12-09 05:31:37.387989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.388032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.470 qpair failed and we were unable to recover it. 
00:38:23.470 [2024-12-09 05:31:37.388404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.470 [2024-12-09 05:31:37.388445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.388861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.388903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.389306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.389346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.389694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.389736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.390081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.390122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 [2024-12-09 05:31:37.390138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.390483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.390524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.390884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.390927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.391308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.391350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.391781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.391831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 
00:38:23.471 [2024-12-09 05:31:37.392083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.392123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.392479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.392519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.392902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.392945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.393316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.393356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.393709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.393749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.394259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.394302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.394681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.394722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.394963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.395005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.395382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.395423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.395692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.395733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 
00:38:23.471 [2024-12-09 05:31:37.396116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.396158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.396557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.396598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.396968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.397009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.397414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.397454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.397883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.397927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.398316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.398362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.398745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.398786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.399082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.399129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.399546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.399588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.399966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.400010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 
00:38:23.471 [2024-12-09 05:31:37.400392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.400433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.400809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.400862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.401259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.401303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.401630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.401670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.401997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.402039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.471 [2024-12-09 05:31:37.402407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.471 [2024-12-09 05:31:37.402449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.471 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.402848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.402891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.403270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.403310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.403683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.403724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.404075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.404118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 
00:38:23.472 [2024-12-09 05:31:37.404360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.404402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.404801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.404852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.405103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.405144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.405537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.405577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.405962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.406005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.406397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.406437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.406787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.406850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.407193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.407234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.407488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.407531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.407838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.407879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 
00:38:23.472 [2024-12-09 05:31:37.408258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.408298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.408663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.408704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.409087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.409132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.409499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.409539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.409775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.409823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.410162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.410202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.410581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.410622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.410995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.411038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.411396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.411437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.411825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.411868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 
00:38:23.472 [2024-12-09 05:31:37.412256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.412298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.412647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.412688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.413075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.413118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.413375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.413415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.413804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.413855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.414222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.414274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.472 qpair failed and we were unable to recover it. 00:38:23.472 [2024-12-09 05:31:37.414661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.472 [2024-12-09 05:31:37.414701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.415048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.415090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.415462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.415504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.415912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.415955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 
00:38:23.473 [2024-12-09 05:31:37.416318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.416358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.416604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.416645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.416931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.416974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.417355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.417395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.417765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.417806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.418085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.418126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.418503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.418544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.418906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.418948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.419334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.419375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 00:38:23.473 [2024-12-09 05:31:37.419832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.473 [2024-12-09 05:31:37.419875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.473 qpair failed and we were unable to recover it. 
00:38:23.473 [2024-12-09 05:31:37.420254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.473 [2024-12-09 05:31:37.420296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:23.473 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats, varying only in timestamp, from 05:31:37.420254 through 05:31:37.472231 — about 130 occurrences in total: every connect() to 10.0.0.2:4420 fails with errno = 111 and qpair 0x615000394700 cannot be recovered ...]
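errno = 111 is ECONNREFUSED on Linux: each TCP connect() to 10.0.0.2:4420 is being actively refused, i.e. nothing is accepting connections on that port yet, so the initiator's qpair setup fails on every attempt. A minimal diagnostic sketch — the address and port are taken from the errors above; nc and ss are ordinary system tools, not part of this test run:

  nc -zv -w 2 10.0.0.2 4420      # expect "Connection refused" while no listener is up
  ss -ltn 'sport = :4420'        # run on the target host to check for a TCP listener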
00:38:23.750 [2024-12-09 05:31:37.472613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.750 [2024-12-09 05:31:37.472654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:23.750 qpair failed and we were unable to recover it.
[... one more identical connect()/qpair failure at 05:31:37.473115 elided ...]
00:38:23.750 [2024-12-09 05:31:37.473320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:23.750 [2024-12-09 05:31:37.473357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:23.750 [2024-12-09 05:31:37.473366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:23.750 [2024-12-09 05:31:37.473376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:23.750 [2024-12-09 05:31:37.473383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... five more identical connect()/qpair failures (05:31:37.473592-05:31:37.475192) elided ...]
00:38:23.750 [2024-12-09 05:31:37.475516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
[... one more identical connect()/qpair failure at 05:31:37.475572 elided ...]
00:38:23.750 [2024-12-09 05:31:37.475682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:38:23.750 [2024-12-09 05:31:37.475785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:38:23.750 [2024-12-09 05:31:37.475813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... nine more identical connect()/qpair failures to 10.0.0.2:4420 (05:31:37.475977-05:31:37.478892) elided ...]
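The reactor threads come up on cores 4-7 (core 5 above, then 6, 4, and 7), which corresponds to an SPDK CPU mask of 0xF0. A hedged sketch of the equivalent launch flag — the actual target command line is not shown in this part of the log, and nvmf_tgt is assumed here to be the application being started:

  nvmf_tgt -m 0xF0    # -m sets the reactor core mask; bits 4-7 set -> cores 4, 5, 6, 7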
00:38:23.751 [2024-12-09 05:31:37.479203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.751 [2024-12-09 05:31:37.479244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:23.751 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats, varying only in timestamp, from 05:31:37.479203 through 05:31:37.500405 — about 60 occurrences in total ...]
00:38:23.752 [2024-12-09 05:31:37.500764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.752 [2024-12-09 05:31:37.500804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.752 qpair failed and we were unable to recover it. 00:38:23.752 [2024-12-09 05:31:37.501065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.752 [2024-12-09 05:31:37.501107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.752 qpair failed and we were unable to recover it. 00:38:23.752 [2024-12-09 05:31:37.501451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.752 [2024-12-09 05:31:37.501492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.752 qpair failed and we were unable to recover it. 00:38:23.752 [2024-12-09 05:31:37.501871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.752 [2024-12-09 05:31:37.501913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.752 qpair failed and we were unable to recover it. 00:38:23.752 [2024-12-09 05:31:37.502278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.752 [2024-12-09 05:31:37.502318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.752 qpair failed and we were unable to recover it. 00:38:23.752 [2024-12-09 05:31:37.502689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.752 [2024-12-09 05:31:37.502730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.752 qpair failed and we were unable to recover it. 00:38:23.752 [2024-12-09 05:31:37.503115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.752 [2024-12-09 05:31:37.503157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.752 qpair failed and we were unable to recover it. 00:38:23.752 [2024-12-09 05:31:37.503525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.752 [2024-12-09 05:31:37.503565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.752 qpair failed and we were unable to recover it. 00:38:23.752 [2024-12-09 05:31:37.503925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.752 [2024-12-09 05:31:37.503966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.504233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.504274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 
00:38:23.753 [2024-12-09 05:31:37.504673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.504715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.505117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.505159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.505541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.505581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.505998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.506041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.506289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.506329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.506700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.506746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.507133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.507175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.507548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.507590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.507929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.507972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.508226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.508271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 
00:38:23.753 [2024-12-09 05:31:37.508523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.508564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.508942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.508986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.509353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.509393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.509743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.509783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.510159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.510200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.510597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.510639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.511023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.511066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.511449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.511488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.511849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.511891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.512325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.512368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 
00:38:23.753 [2024-12-09 05:31:37.512623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.512667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.513038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.513082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.513371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.513411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.513793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.513843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.514225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.514266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.514642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.514682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.514946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.514988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.515403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.515446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.515722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.515762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.516047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.516089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 
00:38:23.753 [2024-12-09 05:31:37.516469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.516509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.516897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.516940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.517326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.517368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.517735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.517776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.518185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.518227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.518483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.753 [2024-12-09 05:31:37.518526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.753 qpair failed and we were unable to recover it. 00:38:23.753 [2024-12-09 05:31:37.518888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.518931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.519294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.519335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.519723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.519764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.520199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.520245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 
00:38:23.754 [2024-12-09 05:31:37.520620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.520660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.521010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.521053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.521195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.521245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.521520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.521563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.521926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.521969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.522347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.522400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.522783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.522843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.523230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.523273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.523626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.523667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.524045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.524088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 
00:38:23.754 [2024-12-09 05:31:37.524225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.524266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.524656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.524699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.525108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.525150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.525408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.525448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.525841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.525884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.526272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.526315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.526467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.526506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.526872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.526914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.527162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.527203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.527612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.527654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 
00:38:23.754 [2024-12-09 05:31:37.528102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.528145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.528520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.528561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.528938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.528981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.529238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.529281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.529528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.529569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.529947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.529988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.530213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.530253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.530620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.530662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.531057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.531100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.531491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.531530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 
00:38:23.754 [2024-12-09 05:31:37.531768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.531808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.532097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.532143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.532511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.532552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.532806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.754 [2024-12-09 05:31:37.532861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.754 qpair failed and we were unable to recover it. 00:38:23.754 [2024-12-09 05:31:37.533216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.533257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.533634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.533675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.534064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.534106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.534463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.534503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.534874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.534915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.535145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.535186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 
00:38:23.755 [2024-12-09 05:31:37.535574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.535614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.536009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.536054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.536431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.536471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.536834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.536877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.537123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.537163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.537559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.537606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.537848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.537891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.538212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.538253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.538636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.538676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.539046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.539088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 
00:38:23.755 [2024-12-09 05:31:37.539482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.539523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.539899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.539943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.540310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.540351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.540722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.540762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.541140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.541182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.541554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.541597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.541980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.542022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.542290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.542331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.542573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.542614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.542861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.542905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 
00:38:23.755 [2024-12-09 05:31:37.543247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.543287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.543499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.543539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.543785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.543835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.544067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.544111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.544258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.544296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.544638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.544679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.544930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.544973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.545195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.545236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.545489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.545528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.545918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.545960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 
00:38:23.755 [2024-12-09 05:31:37.546320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.546361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.546745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.755 [2024-12-09 05:31:37.546786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.755 qpair failed and we were unable to recover it. 00:38:23.755 [2024-12-09 05:31:37.547172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.756 [2024-12-09 05:31:37.547213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.756 qpair failed and we were unable to recover it. 00:38:23.756 [2024-12-09 05:31:37.547596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.756 [2024-12-09 05:31:37.547636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.756 qpair failed and we were unable to recover it. 00:38:23.756 [2024-12-09 05:31:37.547925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.756 [2024-12-09 05:31:37.547966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.756 qpair failed and we were unable to recover it. 00:38:23.756 [2024-12-09 05:31:37.548328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.756 [2024-12-09 05:31:37.548370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.756 qpair failed and we were unable to recover it. 00:38:23.756 [2024-12-09 05:31:37.548740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.756 [2024-12-09 05:31:37.548780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.756 qpair failed and we were unable to recover it. 00:38:23.756 [2024-12-09 05:31:37.549173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.756 [2024-12-09 05:31:37.549215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.756 qpair failed and we were unable to recover it. 00:38:23.756 [2024-12-09 05:31:37.549577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.756 [2024-12-09 05:31:37.549617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.756 qpair failed and we were unable to recover it. 00:38:23.756 [2024-12-09 05:31:37.549870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.756 [2024-12-09 05:31:37.549913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.756 qpair failed and we were unable to recover it. 
00:38:23.756 [2024-12-09 05:31:37.550290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.756 [2024-12-09 05:31:37.550329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.756 qpair failed and we were unable to recover it.
00:38:23.756 [the same three-line failure repeats continuously through 2024-12-09 05:31:37.629963 (elapsed 00:38:23.756-00:38:23.761): every reconnect attempt for tqpair=0x615000394700 at 10.0.0.2:4420 fails in posix_sock_create with connect() errno = 111 and ends with "qpair failed and we were unable to recover it."; duplicate entries condensed]
00:38:23.762 [2024-12-09 05:31:37.630360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.630401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.630783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.630833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.631085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.631125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.631396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.631441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.631856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.631900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.632299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.632339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.632722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.632763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.633154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.633198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.633599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.633640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.634093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.634137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 
00:38:23.762 [2024-12-09 05:31:37.634516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.634557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.634825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.634868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.635226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.635269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.635643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.635685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.635925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.635966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.636339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.636380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.636605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.636646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.636875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.636917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.637326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.637369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.637748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.637789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 
00:38:23.762 [2024-12-09 05:31:37.638183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.638225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.638602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.638642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.638981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.639027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.639427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.639467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.639813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.639884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.640299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.640341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.640717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.640757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.641137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.641180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.641556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.641597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.641883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.641926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 
00:38:23.762 [2024-12-09 05:31:37.642304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.642345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.642722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.642763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.643187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.643229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.643507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.643549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.643891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.643934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.644147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.762 [2024-12-09 05:31:37.644192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.762 qpair failed and we were unable to recover it. 00:38:23.762 [2024-12-09 05:31:37.644544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.644584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.644850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.644891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.645147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.645187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.645573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.645614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 
00:38:23.763 [2024-12-09 05:31:37.645983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.646026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.646285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.646327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.646683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.646723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.647180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.647222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.647442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.647482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.647837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.647880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.648288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.648329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.648585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.648627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.648946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.648989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.649366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.649407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 
00:38:23.763 [2024-12-09 05:31:37.649749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.649790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.650167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.650210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.650584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.650625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.651001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.651043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.651399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.651441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.651693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.651733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.651990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.652033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.652390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.652431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.652806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.652857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.653233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.653274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 
00:38:23.763 [2024-12-09 05:31:37.653620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.653661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.653859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.653899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.654295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.654337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.654694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.654736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.655122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.655164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.655541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.655582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.655950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.655995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.656362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.656403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.656788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.656848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.657134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.657174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 
00:38:23.763 [2024-12-09 05:31:37.657536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.657577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.657775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.657823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.658203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.658244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.763 [2024-12-09 05:31:37.658608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.763 [2024-12-09 05:31:37.658649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.763 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.658872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.658914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.659312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.659360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.659720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.659761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.659989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.660031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.660323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.660363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.660720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.660761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 
00:38:23.764 [2024-12-09 05:31:37.661106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.661149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.661519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.661559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.661944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.661988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.662341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.662383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.662735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.662776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.663169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.663211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.663630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.663671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.664065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.664107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.664455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.664496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.664903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.664945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 
00:38:23.764 [2024-12-09 05:31:37.665324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.665364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.665742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.665782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.665948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.665988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.666334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.666375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.666752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.666794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.667026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.667076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.667429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.667470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.667696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.667739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.668087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.668130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.668509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.668550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 
00:38:23.764 [2024-12-09 05:31:37.668926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.668969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.669389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.669429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.669808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.669868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.670243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.670284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.670661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.670702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.671087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.671131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.671261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.671301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.671652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.671694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.672053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.672094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.672416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.672457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 
00:38:23.764 [2024-12-09 05:31:37.672715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.672755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.764 [2024-12-09 05:31:37.673141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.764 [2024-12-09 05:31:37.673184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.764 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.673535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.673575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.673971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.674014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.674387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.674428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.674828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.674877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.675197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.675238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.675629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.675669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.675960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.676007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.676360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.676400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 
00:38:23.765 [2024-12-09 05:31:37.676610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.676650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.676923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.676970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.677332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.677373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.677749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.677790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.678175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.678218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.678473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.678512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.678864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.678907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.679270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.679310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.679528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.679568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 00:38:23.765 [2024-12-09 05:31:37.679859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.765 [2024-12-09 05:31:37.679902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:23.765 qpair failed and we were unable to recover it. 
00:38:23.765 [2024-12-09 05:31:37.680275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:23.765 [2024-12-09 05:31:37.680315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:23.765 qpair failed and we were unable to recover it.
[... the same three-line error block repeats continuously from 05:31:37.680275 through 05:31:37.759636, every reconnect attempt against tqpair=0x615000394700 at 10.0.0.2:4420 failing with errno = 111; duplicate repetitions condensed ...]
00:38:24.044 [2024-12-09 05:31:37.759594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.044 [2024-12-09 05:31:37.759636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.044 qpair failed and we were unable to recover it.
00:38:24.045 [2024-12-09 05:31:37.759908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.759952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.760343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.760384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.760622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.760668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.760940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.760986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.761371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.761413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.761840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.761883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.762103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.762144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.762483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.762525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.762913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.762955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.763318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.763360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 
00:38:24.045 [2024-12-09 05:31:37.763644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.763692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.764061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.764104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.764468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.764514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.764627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.764666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.765028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.765070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.765297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.765337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.765693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.765734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.765936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.765977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.766360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.766401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.766751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.766792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 
00:38:24.045 [2024-12-09 05:31:37.767034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.767074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.767447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.767487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.767868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.767911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.768353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.768394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.768643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.768684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.768952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.768994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.769393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.769433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.769675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.769715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.770073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.770116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.770376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.770416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 
00:38:24.045 [2024-12-09 05:31:37.770781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.770844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.771130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.771176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.771561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.771602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.771968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.772010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.772270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.772312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.772588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.772631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.773005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.773047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.045 [2024-12-09 05:31:37.773403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.045 [2024-12-09 05:31:37.773443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.045 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.773690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.773730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.773982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.774024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 
00:38:24.046 [2024-12-09 05:31:37.774356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.774397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.774778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.774837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.775216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.775264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.775641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.775683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.776056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.776098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.776458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.776499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.776875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.776918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.777278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.777318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.777530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.777571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.777939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.777981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 
00:38:24.046 [2024-12-09 05:31:37.778201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.778241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.778478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.778518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.778621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.778661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.778931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.778973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.779332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.779373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.779751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.779794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.780174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.780217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.780468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.780509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.780877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.780920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.781169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.781209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 
00:38:24.046 [2024-12-09 05:31:37.781572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.781613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.781978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.782021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.782153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.782196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.782551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.782592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.782807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.782856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.783115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.783161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.783387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.783427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.783760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.783800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.784185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.784228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.784489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.784533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 
00:38:24.046 [2024-12-09 05:31:37.784812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.784863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.785129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.785171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.785568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.785609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.785995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.786038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.046 [2024-12-09 05:31:37.786298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.046 [2024-12-09 05:31:37.786338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.046 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.786729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.786771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.787014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.787058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.787420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.787462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.787839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.787880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.788292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.788332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 
00:38:24.047 [2024-12-09 05:31:37.788568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.788608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.789003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.789044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.789278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.789329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.789723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.789764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.790146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.790189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.790572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.790613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.790868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.790910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.791265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.791306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.791691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.791732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.792091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.792134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 
00:38:24.047 [2024-12-09 05:31:37.792509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.792551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.792790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.792840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.793185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.793227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.793598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.793638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.793914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.793957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.794352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.794394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.794774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.794825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.795201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.795243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.795492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.795533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.795774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.795825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 
00:38:24.047 [2024-12-09 05:31:37.796049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.796090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.796378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.796421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.796664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.796704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.797095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.797138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.797258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.797298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.797663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.797704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.798105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.798147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.047 qpair failed and we were unable to recover it. 00:38:24.047 [2024-12-09 05:31:37.798383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.047 [2024-12-09 05:31:37.798423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.798797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.798847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.799203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.799244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 
00:38:24.048 [2024-12-09 05:31:37.799624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.799665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.800030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.800071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.800450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.800491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.800706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.800745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.801004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.801046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.801370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.801411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.801805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.801858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.802214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.802255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.802627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.802668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.802805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.802862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 
00:38:24.048 [2024-12-09 05:31:37.803275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.803318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.803656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.803698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.803949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.803999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.804376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.804418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.804792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.804857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.805222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.805265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.805640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.805681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.805920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.805962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.806323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.806364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 00:38:24.048 [2024-12-09 05:31:37.806753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.048 [2024-12-09 05:31:37.806794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.048 qpair failed and we were unable to recover it. 
00:38:24.048 [2024-12-09 05:31:37.807190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.048 [2024-12-09 05:31:37.807232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.048 qpair failed and we were unable to recover it.
[the same three-line error repeats for 208 further connection attempts, 2024-12-09 05:31:37.807590 through 05:31:37.884634; only the timestamps change, while errno = 111, tqpair=0x615000394700, addr=10.0.0.2, and port=4420 are identical throughout]
00:38:24.054 [2024-12-09 05:31:37.885080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.054 [2024-12-09 05:31:37.885122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.054 qpair failed and we were unable to recover it.
00:38:24.054 [2024-12-09 05:31:37.885496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.885537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.885773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.885826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.886169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.886211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.886453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.886492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.886740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.886781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.887118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.887160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.887554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.887595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.887839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.887881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.888245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.888287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.888658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.888711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 
00:38:24.054 [2024-12-09 05:31:37.889103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.889145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.889383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.889423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.889789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.889848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.890218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.890258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.890632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.890676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.891051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.891094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.891462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.891503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.891615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.891652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.892020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.892062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.892438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.892478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 
00:38:24.054 [2024-12-09 05:31:37.892731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.892770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.893060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.893102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.893461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.893503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.893903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.893946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.894310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.894351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.894464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.894503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.894944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.894986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.895214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.895253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.054 qpair failed and we were unable to recover it. 00:38:24.054 [2024-12-09 05:31:37.895631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.054 [2024-12-09 05:31:37.895672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.896076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.896118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 
00:38:24.055 [2024-12-09 05:31:37.896387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.896431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.896688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.896728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.897066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.897107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.897360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.897402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.897766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.897807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.898219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.898263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.898676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.898717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.899102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.899145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.899527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.899568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.899923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.899965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 
00:38:24.055 [2024-12-09 05:31:37.900200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.900240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.900617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.900657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.901019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.901060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.901436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.901479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.901719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.901763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.902003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.902047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.902420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.902461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.902684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.902726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.903069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.903113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.903351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.903398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 
00:38:24.055 [2024-12-09 05:31:37.903782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.903835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.904206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.904248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.904571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.904613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.905006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.905049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.905407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.905448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.905831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.905873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.906279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.906322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.906557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.906601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.906995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.907038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.907294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.907333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 
00:38:24.055 [2024-12-09 05:31:37.907624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.907665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.907894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.907936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.908300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.908341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.908719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.908760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.909151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.909193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.909571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.055 [2024-12-09 05:31:37.909612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.055 qpair failed and we were unable to recover it. 00:38:24.055 [2024-12-09 05:31:37.909850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.909892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.910124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.910164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.910538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.910580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.910950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.910993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 
00:38:24.056 [2024-12-09 05:31:37.911348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.911389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.911760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.911802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.912145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.912188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.912534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.912576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.912939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.912981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.913222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.913263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.913615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.913655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.913901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.913942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.914309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.914352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.914694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.914735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 
00:38:24.056 [2024-12-09 05:31:37.915155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.915198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.915570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.915612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.915879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.915926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.916324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.916364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.916741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.916782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.917157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.917200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.917585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.917626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.917988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.918032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.918418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.918459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.918740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.918787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 
00:38:24.056 [2024-12-09 05:31:37.919168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.919210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.919538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.919579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.919951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.919992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.920352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.920393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.920774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.920835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.921201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.921243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.921625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.921666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.922001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.922045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.922395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.922436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.056 [2024-12-09 05:31:37.922813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.922864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 
00:38:24.056 [2024-12-09 05:31:37.923123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.056 [2024-12-09 05:31:37.923164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.056 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.923626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.923667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.923959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.924003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.924256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.924298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.924637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.924679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.924908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.924948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.925322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.925363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.925734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.925775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.926024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.926065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.926439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.926480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 
00:38:24.057 [2024-12-09 05:31:37.926865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.926907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.927266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.927306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.927463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.927503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.927741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.927782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.928020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.928060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.928423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.928465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.928869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.928913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.929156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.929198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.929456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.929496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.929874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.929916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 
00:38:24.057 [2024-12-09 05:31:37.930220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.930273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.930492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.930533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.930896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.930938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.931281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.931322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.931706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.931747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.932167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.932209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.932595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.932635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.932979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.933020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.933265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.933305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.933555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.933606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 
00:38:24.057 [2024-12-09 05:31:37.933856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.933899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.934290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.934332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.934587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.934629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.934878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.934920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.935326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.935368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.935742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.935785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.936375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.936421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.936798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.936859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.057 [2024-12-09 05:31:37.937268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.057 [2024-12-09 05:31:37.937309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.057 qpair failed and we were unable to recover it. 00:38:24.058 [2024-12-09 05:31:37.937550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.058 [2024-12-09 05:31:37.937590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.058 qpair failed and we were unable to recover it. 
00:38:24.062 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:24.062 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:38:24.062 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:24.062 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:24.062 05:31:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:24.062 [2024-12-09 05:31:37.999889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:37.999932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 00:38:24.062 [2024-12-09 05:31:38.000278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:38.000327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 00:38:24.062 [2024-12-09 05:31:38.000716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:38.000758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 00:38:24.062 [2024-12-09 05:31:38.001025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:38.001069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 00:38:24.062 [2024-12-09 05:31:38.001416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:38.001457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 00:38:24.062 [2024-12-09 05:31:38.001843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:38.001886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 00:38:24.062 [2024-12-09 05:31:38.002260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:38.002303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 00:38:24.062 [2024-12-09 05:31:38.002708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:38.002748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 00:38:24.062 [2024-12-09 05:31:38.003013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:38.003059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 00:38:24.062 [2024-12-09 05:31:38.003404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.062 [2024-12-09 05:31:38.003445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.062 qpair failed and we were unable to recover it. 
00:38:24.062 [2024-12-09 05:31:38.003766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.062 [2024-12-09 05:31:38.003807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.062 qpair failed and we were unable to recover it.
00:38:24.062 [2024-12-09 05:31:38.004092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.062 [2024-12-09 05:31:38.004132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.062 qpair failed and we were unable to recover it.
00:38:24.062 [2024-12-09 05:31:38.004495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.062 [2024-12-09 05:31:38.004536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.062 qpair failed and we were unable to recover it.
00:38:24.062 [2024-12-09 05:31:38.004917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.062 [2024-12-09 05:31:38.004960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.062 qpair failed and we were unable to recover it.
00:38:24.062 [2024-12-09 05:31:38.005335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.005378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.005753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.005794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.006170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.006211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.006581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.006623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.007050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.007093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.007468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.007509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.007888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.007937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.008303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.008344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.008740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.008781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.009041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.009082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.009344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.009385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.009791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.009843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.010067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.010108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.010468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.010510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.010886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.010928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.011152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.011193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.011566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.011607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.011986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.012028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.012394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.012436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.012812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.012861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.013222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.013263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.013661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.013701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.013814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.013861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.014205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.014247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.014491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.014533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.014786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.014852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.015260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.015301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.015684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.015724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.015881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.015923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.016306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.016347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.016480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.016524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.016924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.016967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.017324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.017366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.017750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.017792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.018147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.018190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.018395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.018435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.018806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.018855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.019226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.063 [2024-12-09 05:31:38.019266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.063 qpair failed and we were unable to recover it.
00:38:24.063 [2024-12-09 05:31:38.019613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.019654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.019989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.020032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.020378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.020418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.020793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.020844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.021206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.021246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.021576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.021615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.021855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.021896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.022289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.022330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.022679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.022726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.023111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.023153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.023522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.023563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.023886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.023930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.024157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.024198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.064 [2024-12-09 05:31:38.024529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.064 [2024-12-09 05:31:38.024569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.064 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.024941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.024984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.025364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.025405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.025510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.025549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.025783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.025835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.026104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.026146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.026385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.026425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.026800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.026850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.027207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.027247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.027492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.027534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.027890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.027932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.028145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.028185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.028560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.028601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.028980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.029021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.029354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.029396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.029636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.029677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.029924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.331 [2024-12-09 05:31:38.029965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.331 qpair failed and we were unable to recover it.
00:38:24.331 [2024-12-09 05:31:38.030424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.030465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.030807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.030869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.031248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.031291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.031669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.031711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.032052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.032095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.032474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.032516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.032892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.032935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.033305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.033346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.033602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.033643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.034099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.034142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.034383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.034422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.034824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.034866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.035246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.035286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.035499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.035539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.035902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.035944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.036323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.036364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:24.332 [2024-12-09 05:31:38.036744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.036788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:24.332 [2024-12-09 05:31:38.037173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.037223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:24.332 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:24.332 [2024-12-09 05:31:38.037600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.037641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.037934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.037978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.038340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.038380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.038745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.038786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.039166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.039208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.039582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.039623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.039888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.039930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.040326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.040366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.040740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.040781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.041114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.041157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.041555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.041597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.041965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.042007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.042382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.042425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.042779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.042828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.043202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.043242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.043615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.043656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.044038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.044082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.044425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.332 [2024-12-09 05:31:38.044466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.332 qpair failed and we were unable to recover it.
00:38:24.332 [2024-12-09 05:31:38.044785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.044837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.045207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.045248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.045487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.045528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.045907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.045949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.046330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.046372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.046675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.046716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.047095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.047136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.047386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.047428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.047801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.047850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.048223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.048264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.048649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.048690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.048941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.048991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.049432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.049473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.049811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.049860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.050213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.050253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.050408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.050448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.050836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.050879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.051123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.051168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.051408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.051448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.051672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.051712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.052094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.052143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.052520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.052560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.052936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.052978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.053222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.053262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.053617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.053657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.053898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.053940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.054176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.054217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.054573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.054614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.054976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.055017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.055396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.055436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.055807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.055887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.056145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.056190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.056529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.056570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.056949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.056991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.057420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.057463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.057805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.057859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.058245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.058286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.058697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.333 [2024-12-09 05:31:38.058742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.333 qpair failed and we were unable to recover it.
00:38:24.333 [2024-12-09 05:31:38.058989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.059032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.059383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.059424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.059762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.059803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.060071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.060111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.060368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.060409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.060839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.060882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.061246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.061288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.061619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.061659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.061996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.062037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.062415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.062458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.062835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.062877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.063278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.063319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.063604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.063644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.064039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.064082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.064319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.064359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.064725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.064764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.065171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.065214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.065594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.065635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.065889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.065930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.066325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.066365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.066744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.334 [2024-12-09 05:31:38.066786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420
00:38:24.334 qpair failed and we were unable to recover it.
00:38:24.334 [2024-12-09 05:31:38.067035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.067076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.067310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.067356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.067755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.067795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.068173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.068215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.068559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.068601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.068986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.069028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.069463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.069503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.069852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.069894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.070265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.070305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.070660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.070700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 
00:38:24.334 [2024-12-09 05:31:38.070943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.070984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.071328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.071368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.071740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.071781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.072163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.072206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.072592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.072633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.072976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.073020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.073398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.334 [2024-12-09 05:31:38.073438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.334 qpair failed and we were unable to recover it. 00:38:24.334 [2024-12-09 05:31:38.073830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.073872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.074250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.074291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.074637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.074678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 
00:38:24.335 [2024-12-09 05:31:38.075065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.075109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.075521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.075563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.075936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.075978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.076343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.076383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.076619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.076663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.077039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.077082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.077339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.077379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.077628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.077668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.078046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.078089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.078328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.078368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 
00:38:24.335 [2024-12-09 05:31:38.078765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.078805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.079168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.079210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.079462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.079503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.079913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.079956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.080316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.080357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.080602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.080642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.081034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.081076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.081450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.081491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.081730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.081770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.082152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.082193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 
00:38:24.335 [2024-12-09 05:31:38.082404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.082445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.082796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.082852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.083237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.083278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.083635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.083676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.084044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.084087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.084462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.084503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.084880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.084922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.085286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.085327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.085568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.085607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.085832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.085874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 
00:38:24.335 [2024-12-09 05:31:38.086256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.086296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.086535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.086574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.086944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.086985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.087344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.087384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.335 [2024-12-09 05:31:38.087769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.335 [2024-12-09 05:31:38.087809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.335 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.088184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.088225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.088581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.088622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.088964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.089007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.089370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.089410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.089784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.089833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 
00:38:24.336 [2024-12-09 05:31:38.090193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.090235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.090629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.090670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.091065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.091106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.091357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.091399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.091639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.091679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.092053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.092094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.092506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.092548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.092794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.092845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.093251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.093293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 00:38:24.336 [2024-12-09 05:31:38.093674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.336 [2024-12-09 05:31:38.093715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000394700 with addr=10.0.0.2, port=4420 00:38:24.336 qpair failed and we were unable to recover it. 
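errno = 111 is ECONNREFUSED: the host side keeps dialing 10.0.0.2:4420 while nothing is listening there yet, so the kernel rejects every connect() and each qpair attempt fails. A minimal shell probe sketch of the same condition (address and port are taken from the log; bash's /dev/tcp redirection and coreutils timeout are assumptions about the test box, not part of the harness):

  # succeeds only once something is listening on 10.0.0.2:4420; until then
  # connect() fails with ECONNREFUSED (errno 111), exactly as logged above
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listener reachable"
  else
    echo "connection refused or timed out"
  fi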
[... identical connect()/qpair-failure triplets omitted ...]
00:38:24.336 Malloc0
00:38:24.336 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:24.336 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:38:24.336 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:24.336 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical connect()/qpair-failure triplets omitted ...]
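In the trace above, rpc_cmd is the test suite's thin wrapper around SPDK's JSON-RPC client, so outside the harness the same step would look roughly like the sketch below (the script path is an assumption; -t selects the transport type, and per my reading of scripts/rpc.py the -o flag is the TCP C2H-success toggle, but treat that reading as unverified):

  # hypothetical standalone equivalent of the rpc_cmd trace above, issued
  # against the default /var/tmp/spdk.sock of an already-running nvmf_tgt
  ./scripts/rpc.py nvmf_create_transport -t tcp -o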
[... identical connect()/qpair-failure triplets omitted ...]
00:38:24.337 [2024-12-09 05:31:38.103074] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... identical connect()/qpair-failure triplets omitted ...]
00:38:24.337 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:24.337 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:24.338 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:24.338 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical connect()/qpair-failure triplets omitted ...]
00:38:24.338 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:24.338 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:24.338 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:24.338 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
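Taken together, the rpc_cmd traces above bring up the target side while the host keeps retrying: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, then attach the Malloc0 bdev as its namespace. A hedged recap of that sequence with the stock client (script path assumed; the final add_listener step is not shown in this excerpt, but it is the step that would finally let connect() on 10.0.0.2:4420 succeed):

  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # assumed follow-up, not part of this excerpt:
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420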
00:38:24.339 [... the connect failure triple repeats another 20 times (05:31:38.127266 through 05:31:38.133969) while target configuration continues ...]
00:38:24.339 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:24.339 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:24.339 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:24.339 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:24.340 [... the connect failure triple repeats another 25 times (05:31:38.134373 through 05:31:38.143441) until the listener below comes up ...]
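The listener RPC above is the step that finally opens TCP port 4420 on the target; every connect() issued before it completes gets the errno 111 seen throughout this stretch. Expanded the same way, under the same assumptions (default RPC socket, values copied from the xtrace):

    # start an NVMe/TCP listener for the subsystem on the address/port the
    # initiator has been retrying against
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420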
00:38:24.340 [2024-12-09 05:31:38.143666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:24.340 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:24.340 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:24.340 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:24.340 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:24.340 [2024-12-09 05:31:38.154782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:24.340 [2024-12-09 05:31:38.154959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:24.340 [2024-12-09 05:31:38.155033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:24.340 [2024-12-09 05:31:38.155074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:24.340 [2024-12-09 05:31:38.155105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:24.340 [2024-12-09 05:31:38.155185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:24.340 qpair failed and we were unable to recover it.
00:38:24.340 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:24.340 05:31:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1830777
00:38:24.340 [2024-12-09 05:31:38.164391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:24.340 [2024-12-09 05:31:38.164508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:24.340 [2024-12-09 05:31:38.164549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:24.340 [2024-12-09 05:31:38.164573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:24.340 [2024-12-09 05:31:38.164591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:24.340 [2024-12-09 05:31:38.164640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:24.340 qpair failed and we were unable to recover it.
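The failure signature changes here: the TCP connection now succeeds, but the Fabrics CONNECT for the I/O queue pair (qpair id 4) is rejected because the target no longer recognizes controller ID 1, consistent with this disconnect test having torn down the controller the host is trying to rejoin. sct 1 marks a command-specific status, and sc 130 is 0x82, which in the NVMe-oF CONNECT status space reads as invalid parameters (a hedged reading of the spec mapping; the target's own line, Unknown controller ID 0x1, gives the concrete reason). A one-liner to render the status the way the spec tables list it, purely illustrative:

    # print the command-specific status from the log in hex: sc 130 -> 0x82
    printf 'sct=%d sc=0x%02x\n' 1 130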
00:38:24.340 [... the identical CONNECT failure block repeats another 44 times, roughly every 10 ms from 05:31:38.174261 onward, always sct 1 / sc 130 on qpair id 4, up to the final attempt below ...]
00:38:24.867 [2024-12-09 05:31:38.595367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.595435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.595456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.595468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.595477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.595505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 00:38:24.867 [2024-12-09 05:31:38.605461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.605536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.605558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.605570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.605580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.605602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 00:38:24.867 [2024-12-09 05:31:38.615501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.615572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.615593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.615604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.615613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.615635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 
00:38:24.867 [2024-12-09 05:31:38.625341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.625436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.625458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.625469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.625478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.625499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 00:38:24.867 [2024-12-09 05:31:38.635522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.635599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.635620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.635632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.635641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.635663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 00:38:24.867 [2024-12-09 05:31:38.645491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.645561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.645582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.645594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.645604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.645629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 
00:38:24.867 [2024-12-09 05:31:38.655562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.655636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.655658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.655669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.655678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.655700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 00:38:24.867 [2024-12-09 05:31:38.665323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.665414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.665439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.665450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.665459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.665482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 00:38:24.867 [2024-12-09 05:31:38.675653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.675728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.675749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.675760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.675770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.675795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 
00:38:24.867 [2024-12-09 05:31:38.685683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.685749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.685771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.685782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.685791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.685813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 00:38:24.867 [2024-12-09 05:31:38.695689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.695754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.695776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.695787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.695796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.695825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 00:38:24.867 [2024-12-09 05:31:38.705527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.867 [2024-12-09 05:31:38.705625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.867 [2024-12-09 05:31:38.705647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.867 [2024-12-09 05:31:38.705658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.867 [2024-12-09 05:31:38.705667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.867 [2024-12-09 05:31:38.705691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.867 qpair failed and we were unable to recover it. 
00:38:24.867 [2024-12-09 05:31:38.715733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.715799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.715826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.715838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.715847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.715870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 00:38:24.868 [2024-12-09 05:31:38.725776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.725848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.725869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.725880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.725891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.725912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 00:38:24.868 [2024-12-09 05:31:38.735791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.735872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.735893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.735906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.735915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.735937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 
00:38:24.868 [2024-12-09 05:31:38.745537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.745625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.745646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.745658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.745667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.745688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 00:38:24.868 [2024-12-09 05:31:38.755871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.755943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.755963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.755975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.755985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.756006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 00:38:24.868 [2024-12-09 05:31:38.765787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.765872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.765893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.765905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.765915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.765937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 
00:38:24.868 [2024-12-09 05:31:38.775897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.776018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.776039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.776051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.776060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.776082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 00:38:24.868 [2024-12-09 05:31:38.785775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.785845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.785867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.785878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.785888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.785910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 00:38:24.868 [2024-12-09 05:31:38.795985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.796054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.796078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.796090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.796099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.796122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 
00:38:24.868 [2024-12-09 05:31:38.806002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.806081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.806102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.806114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.806123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.806145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 00:38:24.868 [2024-12-09 05:31:38.816026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.816100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.816121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.816132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.816142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.816164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 00:38:24.868 [2024-12-09 05:31:38.825870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.825938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.825959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.825970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.825981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.826003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.868 qpair failed and we were unable to recover it. 
00:38:24.868 [2024-12-09 05:31:38.836112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.868 [2024-12-09 05:31:38.836181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.868 [2024-12-09 05:31:38.836202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.868 [2024-12-09 05:31:38.836214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.868 [2024-12-09 05:31:38.836226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.868 [2024-12-09 05:31:38.836250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.869 qpair failed and we were unable to recover it. 00:38:24.869 [2024-12-09 05:31:38.846081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.869 [2024-12-09 05:31:38.846150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.869 [2024-12-09 05:31:38.846170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.869 [2024-12-09 05:31:38.846182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.869 [2024-12-09 05:31:38.846192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.869 [2024-12-09 05:31:38.846214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.869 qpair failed and we were unable to recover it. 00:38:24.869 [2024-12-09 05:31:38.856125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.869 [2024-12-09 05:31:38.856193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.869 [2024-12-09 05:31:38.856215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.869 [2024-12-09 05:31:38.856234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.869 [2024-12-09 05:31:38.856243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:24.869 [2024-12-09 05:31:38.856265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.869 qpair failed and we were unable to recover it. 
00:38:25.131 [2024-12-09 05:31:38.866102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.866172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.866193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.866204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.866213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.866235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 00:38:25.131 [2024-12-09 05:31:38.876167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.876239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.876260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.876272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.876282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.876304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 00:38:25.131 [2024-12-09 05:31:38.886227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.886341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.886362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.886373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.886383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.886404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 
00:38:25.131 [2024-12-09 05:31:38.896226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.896299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.896321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.896332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.896342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.896363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 00:38:25.131 [2024-12-09 05:31:38.906003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.906065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.906087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.906098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.906107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.906129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 00:38:25.131 [2024-12-09 05:31:38.916300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.916366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.916387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.916398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.916409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.916430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 
00:38:25.131 [2024-12-09 05:31:38.926269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.926334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.926358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.926369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.926379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.926400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 00:38:25.131 [2024-12-09 05:31:38.936297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.936366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.936387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.936398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.936408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.936429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 00:38:25.131 [2024-12-09 05:31:38.946103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.946215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.946237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.946248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.946257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.946278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 
00:38:25.131 [2024-12-09 05:31:38.956388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.956457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.956477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.131 [2024-12-09 05:31:38.956488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.131 [2024-12-09 05:31:38.956498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.131 [2024-12-09 05:31:38.956520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.131 qpair failed and we were unable to recover it. 00:38:25.131 [2024-12-09 05:31:38.966434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.131 [2024-12-09 05:31:38.966503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.131 [2024-12-09 05:31:38.966524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:38.966539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:38.966547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:38.966569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 00:38:25.132 [2024-12-09 05:31:38.976443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:38.976520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:38.976541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:38.976554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:38.976563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:38.976587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 
00:38:25.132 [2024-12-09 05:31:38.986278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:38.986344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:38.986365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:38.986377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:38.986386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:38.986409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 00:38:25.132 [2024-12-09 05:31:38.996474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:38.996565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:38.996587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:38.996599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:38.996608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:38.996630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 00:38:25.132 [2024-12-09 05:31:39.006495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.006579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.006611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:39.006626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:39.006636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:39.006667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 
00:38:25.132 [2024-12-09 05:31:39.016558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.016631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.016655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:39.016667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:39.016677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:39.016701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 00:38:25.132 [2024-12-09 05:31:39.026362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.026427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.026448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:39.026460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:39.026469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:39.026492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 00:38:25.132 [2024-12-09 05:31:39.036561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.036631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.036652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:39.036663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:39.036673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:39.036695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 
00:38:25.132 [2024-12-09 05:31:39.046524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.046628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.046650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:39.046661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:39.046670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:39.046692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 00:38:25.132 [2024-12-09 05:31:39.056562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.056633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.056655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:39.056666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:39.056676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:39.056699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 00:38:25.132 [2024-12-09 05:31:39.066461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.066527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.066548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:39.066560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:39.066570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:39.066593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 
00:38:25.132 [2024-12-09 05:31:39.076623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.076690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.076712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:39.076723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:39.076733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:39.076755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 00:38:25.132 [2024-12-09 05:31:39.086669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.086742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.086763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.132 [2024-12-09 05:31:39.086774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.132 [2024-12-09 05:31:39.086784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.132 [2024-12-09 05:31:39.086806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.132 qpair failed and we were unable to recover it. 00:38:25.132 [2024-12-09 05:31:39.096737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.132 [2024-12-09 05:31:39.096832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.132 [2024-12-09 05:31:39.096853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.133 [2024-12-09 05:31:39.096868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.133 [2024-12-09 05:31:39.096878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.133 [2024-12-09 05:31:39.096900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.133 qpair failed and we were unable to recover it. 
00:38:25.133 [2024-12-09 05:31:39.106559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.133 [2024-12-09 05:31:39.106661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.133 [2024-12-09 05:31:39.106683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.133 [2024-12-09 05:31:39.106694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.133 [2024-12-09 05:31:39.106703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.133 [2024-12-09 05:31:39.106726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-09 05:31:39.116889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.133 [2024-12-09 05:31:39.116963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.133 [2024-12-09 05:31:39.116984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.133 [2024-12-09 05:31:39.116995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.133 [2024-12-09 05:31:39.117004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.133 [2024-12-09 05:31:39.117026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.395 [2024-12-09 05:31:39.126936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.395 [2024-12-09 05:31:39.127022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.395 [2024-12-09 05:31:39.127043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.395 [2024-12-09 05:31:39.127055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.395 [2024-12-09 05:31:39.127064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.395 [2024-12-09 05:31:39.127087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.395 qpair failed and we were unable to recover it. 
00:38:25.395 [2024-12-09 05:31:39.136913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.395 [2024-12-09 05:31:39.136996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.395 [2024-12-09 05:31:39.137017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.395 [2024-12-09 05:31:39.137029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.395 [2024-12-09 05:31:39.137038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.395 [2024-12-09 05:31:39.137064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.395 qpair failed and we were unable to recover it. 00:38:25.395 [2024-12-09 05:31:39.146661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.146729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.146750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.146761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.146770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.146792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 00:38:25.396 [2024-12-09 05:31:39.156914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.157009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.157031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.157043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.157052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.157074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 
00:38:25.396 [2024-12-09 05:31:39.166943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.167072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.167094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.167106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.167115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.167136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 00:38:25.396 [2024-12-09 05:31:39.176949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.177026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.177047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.177059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.177069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.177091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 00:38:25.396 [2024-12-09 05:31:39.186804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.186880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.186901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.186912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.186921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.186944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 
00:38:25.396 [2024-12-09 05:31:39.197013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.197083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.197104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.197115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.197124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.197146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 00:38:25.396 [2024-12-09 05:31:39.207047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.207114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.207135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.207147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.207156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.207178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 00:38:25.396 [2024-12-09 05:31:39.216987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.217054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.217077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.217089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.217099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.217122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 
00:38:25.396 [2024-12-09 05:31:39.226904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.226971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.226996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.227008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.227017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.227040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 00:38:25.396 [2024-12-09 05:31:39.237098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.396 [2024-12-09 05:31:39.237169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.396 [2024-12-09 05:31:39.237191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.396 [2024-12-09 05:31:39.237202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.396 [2024-12-09 05:31:39.237211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.396 [2024-12-09 05:31:39.237233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.396 qpair failed and we were unable to recover it. 00:38:25.396 [2024-12-09 05:31:39.247071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.247138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.247160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.247171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.247181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.247203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 
00:38:25.397 [2024-12-09 05:31:39.257185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.257254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.257274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.257285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.257295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.257318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 00:38:25.397 [2024-12-09 05:31:39.266983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.267045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.267066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.267077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.267086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.267112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 00:38:25.397 [2024-12-09 05:31:39.277228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.277300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.277321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.277332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.277342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.277363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 
00:38:25.397 [2024-12-09 05:31:39.287154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.287224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.287246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.287257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.287267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.287288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 00:38:25.397 [2024-12-09 05:31:39.297254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.297324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.297345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.297356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.297365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.297387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 00:38:25.397 [2024-12-09 05:31:39.307124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.307185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.307207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.307218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.307227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.307251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 
00:38:25.397 [2024-12-09 05:31:39.317339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.317426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.317446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.317458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.317468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.317490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 00:38:25.397 [2024-12-09 05:31:39.327373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.327444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.327465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.327476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.327486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.327508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 00:38:25.397 [2024-12-09 05:31:39.337446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.337511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.337531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.337543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.337552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.337578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 
00:38:25.397 [2024-12-09 05:31:39.347242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.347317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.347349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.347364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.347375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.347403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 00:38:25.397 [2024-12-09 05:31:39.357434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.357510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.357538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.357551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.357561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.357586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 00:38:25.397 [2024-12-09 05:31:39.367482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.397 [2024-12-09 05:31:39.367555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.397 [2024-12-09 05:31:39.367578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.397 [2024-12-09 05:31:39.367589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.397 [2024-12-09 05:31:39.367605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.397 [2024-12-09 05:31:39.367628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.397 qpair failed and we were unable to recover it. 
00:38:25.397 [2024-12-09 05:31:39.377518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.398 [2024-12-09 05:31:39.377595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.398 [2024-12-09 05:31:39.377627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.398 [2024-12-09 05:31:39.377642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.398 [2024-12-09 05:31:39.377653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.398 [2024-12-09 05:31:39.377681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.398 qpair failed and we were unable to recover it. 00:38:25.398 [2024-12-09 05:31:39.387308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.398 [2024-12-09 05:31:39.387404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.398 [2024-12-09 05:31:39.387427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.398 [2024-12-09 05:31:39.387440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.398 [2024-12-09 05:31:39.387449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.398 [2024-12-09 05:31:39.387474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.398 qpair failed and we were unable to recover it. 00:38:25.660 [2024-12-09 05:31:39.397537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.397612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.397633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.397645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.397658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.397682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 
00:38:25.660 [2024-12-09 05:31:39.407547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.407620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.407642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.407653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.407663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.407685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 00:38:25.660 [2024-12-09 05:31:39.417600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.417668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.417689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.417701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.417710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.417733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 00:38:25.660 [2024-12-09 05:31:39.427422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.427485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.427506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.427517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.427527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.427548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 
00:38:25.660 [2024-12-09 05:31:39.437654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.437723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.437743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.437755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.437764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.437787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 00:38:25.660 [2024-12-09 05:31:39.447643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.447719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.447740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.447752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.447765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.447788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 00:38:25.660 [2024-12-09 05:31:39.457694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.457821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.457844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.457855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.457864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.457887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 
00:38:25.660 [2024-12-09 05:31:39.467531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.467595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.467616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.467628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.467637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.467660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 00:38:25.660 [2024-12-09 05:31:39.477763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.477880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.477902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.477913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.477923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.477945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 00:38:25.660 [2024-12-09 05:31:39.487704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.487766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.487791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.487803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.487812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.660 [2024-12-09 05:31:39.487841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.660 qpair failed and we were unable to recover it. 
00:38:25.660 [2024-12-09 05:31:39.497800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.660 [2024-12-09 05:31:39.497875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.660 [2024-12-09 05:31:39.497896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.660 [2024-12-09 05:31:39.497908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.660 [2024-12-09 05:31:39.497918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.497940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 00:38:25.661 [2024-12-09 05:31:39.507585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.507647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.507668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.507680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.507689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.507711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 00:38:25.661 [2024-12-09 05:31:39.517891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.517962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.517983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.517994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.518003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.518026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 
00:38:25.661 [2024-12-09 05:31:39.527873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.527938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.527959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.527974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.527984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.528006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 00:38:25.661 [2024-12-09 05:31:39.537868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.537950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.537972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.537983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.537992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.538015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 00:38:25.661 [2024-12-09 05:31:39.547767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.547843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.547865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.547877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.547886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.547908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 
00:38:25.661 [2024-12-09 05:31:39.557904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.557972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.557993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.558004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.558013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.558035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 00:38:25.661 [2024-12-09 05:31:39.567994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.568066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.568088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.568100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.568109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.568131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 00:38:25.661 [2024-12-09 05:31:39.578019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.578087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.578108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.578119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.578129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.578151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 
00:38:25.661 [2024-12-09 05:31:39.587810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.587885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.587906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.587918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.587927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.587950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 00:38:25.661 [2024-12-09 05:31:39.598014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.598082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.598103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.598114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.598123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.598145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 00:38:25.661 [2024-12-09 05:31:39.608101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.608163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.608184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.608196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.608205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.608227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 
00:38:25.661 [2024-12-09 05:31:39.618122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.618196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.618217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.618228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.618237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.661 [2024-12-09 05:31:39.618259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.661 qpair failed and we were unable to recover it. 00:38:25.661 [2024-12-09 05:31:39.627935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.661 [2024-12-09 05:31:39.628001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.661 [2024-12-09 05:31:39.628027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.661 [2024-12-09 05:31:39.628039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.661 [2024-12-09 05:31:39.628048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.662 [2024-12-09 05:31:39.628070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.662 qpair failed and we were unable to recover it. 00:38:25.662 [2024-12-09 05:31:39.638222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.662 [2024-12-09 05:31:39.638322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.662 [2024-12-09 05:31:39.638343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.662 [2024-12-09 05:31:39.638355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.662 [2024-12-09 05:31:39.638364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.662 [2024-12-09 05:31:39.638385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.662 qpair failed and we were unable to recover it. 
00:38:25.662 [2024-12-09 05:31:39.648190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.662 [2024-12-09 05:31:39.648291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.662 [2024-12-09 05:31:39.648312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.662 [2024-12-09 05:31:39.648323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.662 [2024-12-09 05:31:39.648333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.662 [2024-12-09 05:31:39.648355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.662 qpair failed and we were unable to recover it. 00:38:25.923 [2024-12-09 05:31:39.658235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.923 [2024-12-09 05:31:39.658306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.923 [2024-12-09 05:31:39.658327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.923 [2024-12-09 05:31:39.658343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.923 [2024-12-09 05:31:39.658352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.923 [2024-12-09 05:31:39.658374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.923 qpair failed and we were unable to recover it. 00:38:25.923 [2024-12-09 05:31:39.668131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.923 [2024-12-09 05:31:39.668197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.923 [2024-12-09 05:31:39.668218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.923 [2024-12-09 05:31:39.668229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.923 [2024-12-09 05:31:39.668238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.923 [2024-12-09 05:31:39.668264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.923 qpair failed and we were unable to recover it. 
00:38:25.923 [2024-12-09 05:31:39.678274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.923 [2024-12-09 05:31:39.678354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.923 [2024-12-09 05:31:39.678375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.678387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.678396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.678419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 00:38:25.924 [2024-12-09 05:31:39.688311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.688384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.688405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.688417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.688426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.688448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 00:38:25.924 [2024-12-09 05:31:39.698318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.698416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.698437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.698448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.698457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.698483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 
00:38:25.924 [2024-12-09 05:31:39.708185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.708248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.708269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.708280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.708289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.708312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 00:38:25.924 [2024-12-09 05:31:39.718404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.718477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.718498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.718509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.718519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.718541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 00:38:25.924 [2024-12-09 05:31:39.728348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.728439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.728462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.728476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.728485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.728509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 
00:38:25.924 [2024-12-09 05:31:39.738448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.738518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.738539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.738551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.738560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.738581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 00:38:25.924 [2024-12-09 05:31:39.748216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.748282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.748304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.748316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.748326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.748350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 00:38:25.924 [2024-12-09 05:31:39.758510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.758590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.758611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.758622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.758632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.758654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 
00:38:25.924 [2024-12-09 05:31:39.768529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.768606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.768637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.768652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.768663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.768691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 00:38:25.924 [2024-12-09 05:31:39.778540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.778614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.778637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.778651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.778661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.778686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 00:38:25.924 [2024-12-09 05:31:39.788400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.788465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.788493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.788505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.788514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.788537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 
00:38:25.924 [2024-12-09 05:31:39.798611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.798691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.798712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.924 [2024-12-09 05:31:39.798725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.924 [2024-12-09 05:31:39.798734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.924 [2024-12-09 05:31:39.798756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.924 qpair failed and we were unable to recover it. 00:38:25.924 [2024-12-09 05:31:39.808639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.924 [2024-12-09 05:31:39.808717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.924 [2024-12-09 05:31:39.808738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.808749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.808759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.808781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 00:38:25.925 [2024-12-09 05:31:39.818664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.818748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.818769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.818780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.818789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.818812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 
00:38:25.925 [2024-12-09 05:31:39.828430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.828499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.828521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.828532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.828545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.828567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 00:38:25.925 [2024-12-09 05:31:39.838689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.838788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.838810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.838827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.838837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.838859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 00:38:25.925 [2024-12-09 05:31:39.848769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.848846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.848867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.848880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.848889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.848911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 
00:38:25.925 [2024-12-09 05:31:39.858756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.858827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.858848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.858860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.858870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.858892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 00:38:25.925 [2024-12-09 05:31:39.868693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.868788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.868810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.868827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.868837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.868860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 00:38:25.925 [2024-12-09 05:31:39.878828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.878902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.878924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.878935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.878945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.878975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 
00:38:25.925 [2024-12-09 05:31:39.888828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.888897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.888918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.888930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.888940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.888962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 00:38:25.925 [2024-12-09 05:31:39.898879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.898947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.898967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.898979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.898988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.899009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 00:38:25.925 [2024-12-09 05:31:39.908662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:25.925 [2024-12-09 05:31:39.908728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:25.925 [2024-12-09 05:31:39.908748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:25.925 [2024-12-09 05:31:39.908760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:25.925 [2024-12-09 05:31:39.908769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:25.925 [2024-12-09 05:31:39.908790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.925 qpair failed and we were unable to recover it. 
00:38:26.187 [2024-12-09 05:31:39.918943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.187 [2024-12-09 05:31:39.919032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.187 [2024-12-09 05:31:39.919056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.187 [2024-12-09 05:31:39.919067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.187 [2024-12-09 05:31:39.919076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:39.919097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 00:38:26.188 [2024-12-09 05:31:39.928969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:39.929050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:39.929072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:39.929084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:39.929093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:39.929115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 00:38:26.188 [2024-12-09 05:31:39.938901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:39.938978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:39.938999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:39.939011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:39.939020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:39.939041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 
00:38:26.188 [2024-12-09 05:31:39.948811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:39.948892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:39.948913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:39.948924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:39.948933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:39.948955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 00:38:26.188 [2024-12-09 05:31:39.959039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:39.959112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:39.959133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:39.959144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:39.959157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:39.959179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 00:38:26.188 [2024-12-09 05:31:39.969108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:39.969177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:39.969198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:39.969209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:39.969218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:39.969240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 
00:38:26.188 [2024-12-09 05:31:39.979082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:39.979154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:39.979174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:39.979186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:39.979194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:39.979217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 00:38:26.188 [2024-12-09 05:31:39.988913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:39.988980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:39.989001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:39.989012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:39.989021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:39.989043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 00:38:26.188 [2024-12-09 05:31:39.999152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:39.999220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:39.999241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:39.999252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:39.999261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:39.999286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 
00:38:26.188 [2024-12-09 05:31:40.009177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:40.009255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:40.009277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:40.009289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:40.009298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:40.009320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 00:38:26.188 [2024-12-09 05:31:40.019153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:40.019249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:40.019275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:40.019288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:40.019298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:40.019323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 00:38:26.188 [2024-12-09 05:31:40.028952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:40.029023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:40.029045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:40.029057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:40.029067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:40.029090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 
00:38:26.188 [2024-12-09 05:31:40.039175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:40.039246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:40.039268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:40.039279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.188 [2024-12-09 05:31:40.039289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.188 [2024-12-09 05:31:40.039312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.188 qpair failed and we were unable to recover it. 00:38:26.188 [2024-12-09 05:31:40.049217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.188 [2024-12-09 05:31:40.049289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.188 [2024-12-09 05:31:40.049314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.188 [2024-12-09 05:31:40.049326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.049336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.049358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 00:38:26.189 [2024-12-09 05:31:40.059321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.059387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.059408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.059420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.059428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.059450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 
00:38:26.189 [2024-12-09 05:31:40.069137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.069199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.069221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.069232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.069241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.069264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 00:38:26.189 [2024-12-09 05:31:40.079284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.079352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.079373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.079384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.079394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.079416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 00:38:26.189 [2024-12-09 05:31:40.089397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.089473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.089493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.089509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.089518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.089540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 
00:38:26.189 [2024-12-09 05:31:40.099327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.099394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.099415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.099427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.099436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.099458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 00:38:26.189 [2024-12-09 05:31:40.109242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.109304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.109325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.109336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.109346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.109368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 00:38:26.189 [2024-12-09 05:31:40.119493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.119568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.119589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.119601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.119610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.119632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 
00:38:26.189 [2024-12-09 05:31:40.129484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.129557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.129578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.129589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.129598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.129621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 00:38:26.189 [2024-12-09 05:31:40.139683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.139752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.139773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.139791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.139800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.139827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 00:38:26.189 [2024-12-09 05:31:40.149355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.149422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.149443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.149454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.149463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.149484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 
00:38:26.189 [2024-12-09 05:31:40.159566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.159636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.159657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.159668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.159677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.159699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 00:38:26.189 [2024-12-09 05:31:40.169613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.169685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.169707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.169718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.169727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.189 [2024-12-09 05:31:40.169749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.189 qpair failed and we were unable to recover it. 00:38:26.189 [2024-12-09 05:31:40.179601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.189 [2024-12-09 05:31:40.179669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.189 [2024-12-09 05:31:40.179691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.189 [2024-12-09 05:31:40.179703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.189 [2024-12-09 05:31:40.179713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.190 [2024-12-09 05:31:40.179735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.190 qpair failed and we were unable to recover it. 
00:38:26.451 [2024-12-09 05:31:40.189468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.451 [2024-12-09 05:31:40.189574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.451 [2024-12-09 05:31:40.189596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.451 [2024-12-09 05:31:40.189608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.451 [2024-12-09 05:31:40.189617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.451 [2024-12-09 05:31:40.189639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.451 qpair failed and we were unable to recover it. 00:38:26.451 [2024-12-09 05:31:40.199681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.451 [2024-12-09 05:31:40.199752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.451 [2024-12-09 05:31:40.199773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.451 [2024-12-09 05:31:40.199784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.451 [2024-12-09 05:31:40.199793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.451 [2024-12-09 05:31:40.199820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.451 qpair failed and we were unable to recover it. 00:38:26.451 [2024-12-09 05:31:40.209620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.451 [2024-12-09 05:31:40.209687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.451 [2024-12-09 05:31:40.209708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.451 [2024-12-09 05:31:40.209719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.451 [2024-12-09 05:31:40.209728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.451 [2024-12-09 05:31:40.209750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.451 qpair failed and we were unable to recover it. 
00:38:26.451 [2024-12-09 05:31:40.219778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.451 [2024-12-09 05:31:40.219853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.451 [2024-12-09 05:31:40.219876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.451 [2024-12-09 05:31:40.219891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.451 [2024-12-09 05:31:40.219900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.451 [2024-12-09 05:31:40.219923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.451 qpair failed and we were unable to recover it. 00:38:26.451 [2024-12-09 05:31:40.229464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.451 [2024-12-09 05:31:40.229536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.451 [2024-12-09 05:31:40.229557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.451 [2024-12-09 05:31:40.229569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.451 [2024-12-09 05:31:40.229578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.451 [2024-12-09 05:31:40.229599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.451 qpair failed and we were unable to recover it. 00:38:26.451 [2024-12-09 05:31:40.239799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.451 [2024-12-09 05:31:40.239888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.451 [2024-12-09 05:31:40.239909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.451 [2024-12-09 05:31:40.239920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.451 [2024-12-09 05:31:40.239929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.451 [2024-12-09 05:31:40.239952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.451 qpair failed and we were unable to recover it. 
00:38:26.451 [2024-12-09 05:31:40.249563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.451 [2024-12-09 05:31:40.249664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.451 [2024-12-09 05:31:40.249685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.451 [2024-12-09 05:31:40.249696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.451 [2024-12-09 05:31:40.249706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.249728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 00:38:26.452 [2024-12-09 05:31:40.259873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.259941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.259962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.259974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.259983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.260009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 00:38:26.452 [2024-12-09 05:31:40.269671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.269738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.269759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.269771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.269780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.269802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 
00:38:26.452 [2024-12-09 05:31:40.279881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.279952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.279973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.279985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.279994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.280016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 00:38:26.452 [2024-12-09 05:31:40.289650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.289722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.289743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.289754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.289763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.289785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 00:38:26.452 [2024-12-09 05:31:40.299950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.300016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.300037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.300049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.300058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.300080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 
00:38:26.452 [2024-12-09 05:31:40.309684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.309747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.309768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.309780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.309788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.309810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 00:38:26.452 [2024-12-09 05:31:40.320011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.320080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.320101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.320112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.320121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.320143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 00:38:26.452 [2024-12-09 05:31:40.329836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.329901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.329923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.329934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.329943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.329968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 
00:38:26.452 [2024-12-09 05:31:40.339953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.340021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.340042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.340054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.340063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.340085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 00:38:26.452 [2024-12-09 05:31:40.349880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.349962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.349986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.349997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.350006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.350028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 00:38:26.452 [2024-12-09 05:31:40.360132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.360200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.360221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.360233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.360242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.360263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 
00:38:26.452 [2024-12-09 05:31:40.369928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.370040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.452 [2024-12-09 05:31:40.370062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.452 [2024-12-09 05:31:40.370073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.452 [2024-12-09 05:31:40.370082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.452 [2024-12-09 05:31:40.370104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.452 qpair failed and we were unable to recover it. 00:38:26.452 [2024-12-09 05:31:40.380211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.452 [2024-12-09 05:31:40.380279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.453 [2024-12-09 05:31:40.380299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.453 [2024-12-09 05:31:40.380310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.453 [2024-12-09 05:31:40.380319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.453 [2024-12-09 05:31:40.380341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.453 qpair failed and we were unable to recover it. 00:38:26.453 [2024-12-09 05:31:40.389950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.453 [2024-12-09 05:31:40.390013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.453 [2024-12-09 05:31:40.390033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.453 [2024-12-09 05:31:40.390044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.453 [2024-12-09 05:31:40.390057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.453 [2024-12-09 05:31:40.390079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.453 qpair failed and we were unable to recover it. 
00:38:26.453 [2024-12-09 05:31:40.400211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.453 [2024-12-09 05:31:40.400295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.453 [2024-12-09 05:31:40.400315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.453 [2024-12-09 05:31:40.400326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.453 [2024-12-09 05:31:40.400336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.453 [2024-12-09 05:31:40.400357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.453 qpair failed and we were unable to recover it. 00:38:26.453 [2024-12-09 05:31:40.409964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.453 [2024-12-09 05:31:40.410028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.453 [2024-12-09 05:31:40.410049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.453 [2024-12-09 05:31:40.410060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.453 [2024-12-09 05:31:40.410069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.453 [2024-12-09 05:31:40.410091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.453 qpair failed and we were unable to recover it. 00:38:26.453 [2024-12-09 05:31:40.420266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.453 [2024-12-09 05:31:40.420382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.453 [2024-12-09 05:31:40.420402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.453 [2024-12-09 05:31:40.420413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.453 [2024-12-09 05:31:40.420423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.453 [2024-12-09 05:31:40.420445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.453 qpair failed and we were unable to recover it. 
00:38:26.453 [2024-12-09 05:31:40.430129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.453 [2024-12-09 05:31:40.430209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.453 [2024-12-09 05:31:40.430230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.453 [2024-12-09 05:31:40.430242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.453 [2024-12-09 05:31:40.430251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.453 [2024-12-09 05:31:40.430273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.453 qpair failed and we were unable to recover it. 00:38:26.453 [2024-12-09 05:31:40.440326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.453 [2024-12-09 05:31:40.440404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.453 [2024-12-09 05:31:40.440425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.453 [2024-12-09 05:31:40.440436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.453 [2024-12-09 05:31:40.440446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.453 [2024-12-09 05:31:40.440467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.453 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-09 05:31:40.450160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.714 [2024-12-09 05:31:40.450226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.714 [2024-12-09 05:31:40.450247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.714 [2024-12-09 05:31:40.450259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.714 [2024-12-09 05:31:40.450268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.714 [2024-12-09 05:31:40.450289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.714 qpair failed and we were unable to recover it. 
00:38:26.714 [2024-12-09 05:31:40.460386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.714 [2024-12-09 05:31:40.460447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.714 [2024-12-09 05:31:40.460468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.714 [2024-12-09 05:31:40.460479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.714 [2024-12-09 05:31:40.460488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.714 [2024-12-09 05:31:40.460511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-09 05:31:40.470170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.714 [2024-12-09 05:31:40.470275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.714 [2024-12-09 05:31:40.470296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.714 [2024-12-09 05:31:40.470308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.714 [2024-12-09 05:31:40.470317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.714 [2024-12-09 05:31:40.470339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-09 05:31:40.480432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.714 [2024-12-09 05:31:40.480496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.714 [2024-12-09 05:31:40.480522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.714 [2024-12-09 05:31:40.480534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.714 [2024-12-09 05:31:40.480543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.714 [2024-12-09 05:31:40.480565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.714 qpair failed and we were unable to recover it. 
00:38:26.714 [2024-12-09 05:31:40.490292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.714 [2024-12-09 05:31:40.490359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.714 [2024-12-09 05:31:40.490379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.714 [2024-12-09 05:31:40.490391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.714 [2024-12-09 05:31:40.490401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.714 [2024-12-09 05:31:40.490422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-09 05:31:40.500428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.714 [2024-12-09 05:31:40.500494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.714 [2024-12-09 05:31:40.500515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.714 [2024-12-09 05:31:40.500526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.714 [2024-12-09 05:31:40.500536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.714 [2024-12-09 05:31:40.500558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-09 05:31:40.510284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.714 [2024-12-09 05:31:40.510357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.714 [2024-12-09 05:31:40.510389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.714 [2024-12-09 05:31:40.510405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.714 [2024-12-09 05:31:40.510416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.714 [2024-12-09 05:31:40.510444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.714 qpair failed and we were unable to recover it. 
00:38:26.714 [2024-12-09 05:31:40.520521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.714 [2024-12-09 05:31:40.520592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.714 [2024-12-09 05:31:40.520616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.714 [2024-12-09 05:31:40.520628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.714 [2024-12-09 05:31:40.520642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.714 [2024-12-09 05:31:40.520666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-09 05:31:40.530360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.530446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.530478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.530492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.530503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.530531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-09 05:31:40.540365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.540433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.540457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.540469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.540479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.540503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 
00:38:26.715 [2024-12-09 05:31:40.550395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.550458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.550480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.550492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.550501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.550525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-09 05:31:40.560631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.560702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.560724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.560736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.560745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.560768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-09 05:31:40.570451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.570561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.570583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.570595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.570605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.570627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 
00:38:26.715 [2024-12-09 05:31:40.580504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.580574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.580595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.580606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.580616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.580638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-09 05:31:40.590516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.590580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.590601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.590612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.590621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.590643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-09 05:31:40.600740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.600811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.600837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.600849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.600860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.600881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 
00:38:26.715 [2024-12-09 05:31:40.610558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.610625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.610649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.610660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.610670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.610691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-09 05:31:40.620614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.620680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.620701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.620712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.620721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.620744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-09 05:31:40.630627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.630696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.630717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.630728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.630737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.630759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 
00:38:26.715 [2024-12-09 05:31:40.640833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.640900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.640921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.715 [2024-12-09 05:31:40.640933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.715 [2024-12-09 05:31:40.640944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.715 [2024-12-09 05:31:40.640967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-09 05:31:40.650667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.715 [2024-12-09 05:31:40.650727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.715 [2024-12-09 05:31:40.650748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.716 [2024-12-09 05:31:40.650763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.716 [2024-12-09 05:31:40.650779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.716 [2024-12-09 05:31:40.650802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.716 qpair failed and we were unable to recover it. 00:38:26.716 [2024-12-09 05:31:40.660684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.716 [2024-12-09 05:31:40.660744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.716 [2024-12-09 05:31:40.660765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.716 [2024-12-09 05:31:40.660776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.716 [2024-12-09 05:31:40.660785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.716 [2024-12-09 05:31:40.660811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.716 qpair failed and we were unable to recover it. 
00:38:26.716 [2024-12-09 05:31:40.670743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.716 [2024-12-09 05:31:40.670848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.716 [2024-12-09 05:31:40.670871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.716 [2024-12-09 05:31:40.670882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.716 [2024-12-09 05:31:40.670892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.716 [2024-12-09 05:31:40.670914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.716 qpair failed and we were unable to recover it. 00:38:26.716 [2024-12-09 05:31:40.680937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.716 [2024-12-09 05:31:40.681010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.716 [2024-12-09 05:31:40.681031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.716 [2024-12-09 05:31:40.681043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.716 [2024-12-09 05:31:40.681053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.716 [2024-12-09 05:31:40.681076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.716 qpair failed and we were unable to recover it. 00:38:26.716 [2024-12-09 05:31:40.690777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.716 [2024-12-09 05:31:40.690846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.716 [2024-12-09 05:31:40.690867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.716 [2024-12-09 05:31:40.690879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.716 [2024-12-09 05:31:40.690888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.716 [2024-12-09 05:31:40.690910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.716 qpair failed and we were unable to recover it. 
00:38:26.716 [2024-12-09 05:31:40.700784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.716 [2024-12-09 05:31:40.700881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.716 [2024-12-09 05:31:40.700904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.716 [2024-12-09 05:31:40.700916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.716 [2024-12-09 05:31:40.700925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.716 [2024-12-09 05:31:40.700947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.716 qpair failed and we were unable to recover it. 00:38:26.977 [2024-12-09 05:31:40.710761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.977 [2024-12-09 05:31:40.710855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.977 [2024-12-09 05:31:40.710877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.977 [2024-12-09 05:31:40.710888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.977 [2024-12-09 05:31:40.710897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.977 [2024-12-09 05:31:40.710919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.977 qpair failed and we were unable to recover it. 00:38:26.977 [2024-12-09 05:31:40.721107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.977 [2024-12-09 05:31:40.721211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.977 [2024-12-09 05:31:40.721232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.977 [2024-12-09 05:31:40.721243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.977 [2024-12-09 05:31:40.721252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.977 [2024-12-09 05:31:40.721274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.977 qpair failed and we were unable to recover it. 
00:38:26.977 [2024-12-09 05:31:40.730865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.977 [2024-12-09 05:31:40.730932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.977 [2024-12-09 05:31:40.730953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.977 [2024-12-09 05:31:40.730964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.977 [2024-12-09 05:31:40.730974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.977 [2024-12-09 05:31:40.730995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.977 qpair failed and we were unable to recover it. 00:38:26.977 [2024-12-09 05:31:40.740893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.977 [2024-12-09 05:31:40.740961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.977 [2024-12-09 05:31:40.740982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.977 [2024-12-09 05:31:40.740994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.977 [2024-12-09 05:31:40.741002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.977 [2024-12-09 05:31:40.741024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.977 qpair failed and we were unable to recover it. 00:38:26.977 [2024-12-09 05:31:40.750930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.977 [2024-12-09 05:31:40.750993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.977 [2024-12-09 05:31:40.751014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.977 [2024-12-09 05:31:40.751025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.977 [2024-12-09 05:31:40.751034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.977 [2024-12-09 05:31:40.751056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.977 qpair failed and we were unable to recover it. 
00:38:26.977 [2024-12-09 05:31:40.761150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.977 [2024-12-09 05:31:40.761220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.977 [2024-12-09 05:31:40.761240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.977 [2024-12-09 05:31:40.761252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.977 [2024-12-09 05:31:40.761261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.977 [2024-12-09 05:31:40.761283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.977 qpair failed and we were unable to recover it. 00:38:26.977 [2024-12-09 05:31:40.770995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.977 [2024-12-09 05:31:40.771063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.977 [2024-12-09 05:31:40.771084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.977 [2024-12-09 05:31:40.771096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.977 [2024-12-09 05:31:40.771105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.977 [2024-12-09 05:31:40.771127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.977 qpair failed and we were unable to recover it. 00:38:26.977 [2024-12-09 05:31:40.781044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:26.977 [2024-12-09 05:31:40.781103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:26.977 [2024-12-09 05:31:40.781124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:26.977 [2024-12-09 05:31:40.781139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:26.977 [2024-12-09 05:31:40.781149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:26.977 [2024-12-09 05:31:40.781171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.977 qpair failed and we were unable to recover it. 
00:38:26.977 [2024-12-09 05:31:40.791039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:26.977 [2024-12-09 05:31:40.791104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:26.977 [2024-12-09 05:31:40.791125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:26.977 [2024-12-09 05:31:40.791137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:26.977 [2024-12-09 05:31:40.791146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:26.977 [2024-12-09 05:31:40.791168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:26.977 qpair failed and we were unable to recover it.
[... the same seven-line NVMe-oF CONNECT failure sequence repeats for 67 further qpair connect attempts, roughly every 10 ms (device timestamps 05:31:40.801 through 05:31:41.462, console timestamps 00:38:26.977 through 00:38:27.507); only the timestamps vary ...]
00:38:27.507 [2024-12-09 05:31:41.472827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:27.507 [2024-12-09 05:31:41.472893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:27.507 [2024-12-09 05:31:41.472919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:27.507 [2024-12-09 05:31:41.472931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:27.507 [2024-12-09 05:31:41.472941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:27.507 [2024-12-09 05:31:41.472963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:27.507 qpair failed and we were unable to recover it.
00:38:27.507 [2024-12-09 05:31:41.483070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.507 [2024-12-09 05:31:41.483168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.507 [2024-12-09 05:31:41.483189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.507 [2024-12-09 05:31:41.483200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.507 [2024-12-09 05:31:41.483209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.507 [2024-12-09 05:31:41.483232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.507 qpair failed and we were unable to recover it. 00:38:27.507 [2024-12-09 05:31:41.492866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.507 [2024-12-09 05:31:41.492928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.507 [2024-12-09 05:31:41.492949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.507 [2024-12-09 05:31:41.492960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.507 [2024-12-09 05:31:41.492969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.507 [2024-12-09 05:31:41.492992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.507 qpair failed and we were unable to recover it. 00:38:27.769 [2024-12-09 05:31:41.502937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.769 [2024-12-09 05:31:41.502998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.769 [2024-12-09 05:31:41.503019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.769 [2024-12-09 05:31:41.503030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.769 [2024-12-09 05:31:41.503039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.769 [2024-12-09 05:31:41.503062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.769 qpair failed and we were unable to recover it. 
00:38:27.769 [2024-12-09 05:31:41.512921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.769 [2024-12-09 05:31:41.512986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.769 [2024-12-09 05:31:41.513007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.769 [2024-12-09 05:31:41.513019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.769 [2024-12-09 05:31:41.513032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.769 [2024-12-09 05:31:41.513055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-12-09 05:31:41.523159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.769 [2024-12-09 05:31:41.523229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.769 [2024-12-09 05:31:41.523249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.769 [2024-12-09 05:31:41.523262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.769 [2024-12-09 05:31:41.523271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.769 [2024-12-09 05:31:41.523293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-12-09 05:31:41.533001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.769 [2024-12-09 05:31:41.533106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.769 [2024-12-09 05:31:41.533127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.769 [2024-12-09 05:31:41.533139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.769 [2024-12-09 05:31:41.533148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.769 [2024-12-09 05:31:41.533169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.769 qpair failed and we were unable to recover it. 
00:38:27.769 [2024-12-09 05:31:41.543039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.769 [2024-12-09 05:31:41.543106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.769 [2024-12-09 05:31:41.543127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.769 [2024-12-09 05:31:41.543138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.769 [2024-12-09 05:31:41.543147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.769 [2024-12-09 05:31:41.543169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-12-09 05:31:41.553054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.769 [2024-12-09 05:31:41.553120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.769 [2024-12-09 05:31:41.553141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.769 [2024-12-09 05:31:41.553152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.769 [2024-12-09 05:31:41.553161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.769 [2024-12-09 05:31:41.553184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-12-09 05:31:41.563308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.769 [2024-12-09 05:31:41.563376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.769 [2024-12-09 05:31:41.563397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.769 [2024-12-09 05:31:41.563408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.769 [2024-12-09 05:31:41.563418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.563439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 
00:38:27.770 [2024-12-09 05:31:41.573048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.573110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.573131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.573142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.573151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.573175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-12-09 05:31:41.583156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.583251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.583272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.583283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.583293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.583314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-12-09 05:31:41.593195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.593284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.593306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.593318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.593328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.593349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 
00:38:27.770 [2024-12-09 05:31:41.603308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.603382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.603406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.603418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.603428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.603449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-12-09 05:31:41.613199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.613263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.613285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.613296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.613305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.613327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-12-09 05:31:41.623326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.623395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.623417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.623428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.623437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.623460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 
00:38:27.770 [2024-12-09 05:31:41.633310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.633393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.633414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.633426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.633436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.633458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-12-09 05:31:41.643508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.643578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.643599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.643610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.643624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.643647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-12-09 05:31:41.653322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.653406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.653438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.653453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.653463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.653495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 
00:38:27.770 [2024-12-09 05:31:41.663352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.663418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.663442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.663455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.663465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.663489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-12-09 05:31:41.673383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.673451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.673473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.673484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.673494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.673517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-12-09 05:31:41.683531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.683601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.683623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.683635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.770 [2024-12-09 05:31:41.683645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.770 [2024-12-09 05:31:41.683667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.770 qpair failed and we were unable to recover it. 
00:38:27.770 [2024-12-09 05:31:41.693444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.770 [2024-12-09 05:31:41.693517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.770 [2024-12-09 05:31:41.693548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.770 [2024-12-09 05:31:41.693564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.771 [2024-12-09 05:31:41.693575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.771 [2024-12-09 05:31:41.693603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-12-09 05:31:41.703460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.771 [2024-12-09 05:31:41.703565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.771 [2024-12-09 05:31:41.703590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.771 [2024-12-09 05:31:41.703602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.771 [2024-12-09 05:31:41.703613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.771 [2024-12-09 05:31:41.703637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-12-09 05:31:41.713518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.771 [2024-12-09 05:31:41.713600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.771 [2024-12-09 05:31:41.713631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.771 [2024-12-09 05:31:41.713646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.771 [2024-12-09 05:31:41.713656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.771 [2024-12-09 05:31:41.713685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.771 qpair failed and we were unable to recover it. 
00:38:27.771 [2024-12-09 05:31:41.723737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.771 [2024-12-09 05:31:41.723806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.771 [2024-12-09 05:31:41.723837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.771 [2024-12-09 05:31:41.723849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.771 [2024-12-09 05:31:41.723858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.771 [2024-12-09 05:31:41.723883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-12-09 05:31:41.733467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.771 [2024-12-09 05:31:41.733566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.771 [2024-12-09 05:31:41.733591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.771 [2024-12-09 05:31:41.733604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.771 [2024-12-09 05:31:41.733613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.771 [2024-12-09 05:31:41.733636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-12-09 05:31:41.743599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.771 [2024-12-09 05:31:41.743708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.771 [2024-12-09 05:31:41.743729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.771 [2024-12-09 05:31:41.743741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.771 [2024-12-09 05:31:41.743750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.771 [2024-12-09 05:31:41.743771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.771 qpair failed and we were unable to recover it. 
00:38:27.771 [2024-12-09 05:31:41.753595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:27.771 [2024-12-09 05:31:41.753664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:27.771 [2024-12-09 05:31:41.753685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:27.771 [2024-12-09 05:31:41.753696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:27.771 [2024-12-09 05:31:41.753706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:27.771 [2024-12-09 05:31:41.753728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:27.771 qpair failed and we were unable to recover it. 00:38:28.033 [2024-12-09 05:31:41.763853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.033 [2024-12-09 05:31:41.763925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.033 [2024-12-09 05:31:41.763946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.033 [2024-12-09 05:31:41.763958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.033 [2024-12-09 05:31:41.763968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.033 [2024-12-09 05:31:41.763990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.033 qpair failed and we were unable to recover it. 00:38:28.033 [2024-12-09 05:31:41.773585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.033 [2024-12-09 05:31:41.773650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.033 [2024-12-09 05:31:41.773671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.033 [2024-12-09 05:31:41.773686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.033 [2024-12-09 05:31:41.773696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.033 [2024-12-09 05:31:41.773717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.033 qpair failed and we were unable to recover it. 
00:38:28.033 [2024-12-09 05:31:41.783619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.033 [2024-12-09 05:31:41.783726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.033 [2024-12-09 05:31:41.783748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.033 [2024-12-09 05:31:41.783760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.033 [2024-12-09 05:31:41.783769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.033 [2024-12-09 05:31:41.783792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.033 qpair failed and we were unable to recover it. 00:38:28.033 [2024-12-09 05:31:41.793709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.033 [2024-12-09 05:31:41.793773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.033 [2024-12-09 05:31:41.793794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.033 [2024-12-09 05:31:41.793805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.033 [2024-12-09 05:31:41.793819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.033 [2024-12-09 05:31:41.793842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.033 qpair failed and we were unable to recover it. 00:38:28.033 [2024-12-09 05:31:41.803860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.033 [2024-12-09 05:31:41.803929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.033 [2024-12-09 05:31:41.803950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.033 [2024-12-09 05:31:41.803961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.033 [2024-12-09 05:31:41.803971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.033 [2024-12-09 05:31:41.803993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.033 qpair failed and we were unable to recover it. 
00:38:28.033 [2024-12-09 05:31:41.813801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.033 [2024-12-09 05:31:41.813880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.033 [2024-12-09 05:31:41.813901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.813914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.813923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.813949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 00:38:28.034 [2024-12-09 05:31:41.823780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.823875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.823897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.823908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.823917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.823939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 00:38:28.034 [2024-12-09 05:31:41.833805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.833879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.833899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.833911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.833920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.833942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 
00:38:28.034 [2024-12-09 05:31:41.844035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.844105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.844126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.844137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.844147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.844169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 00:38:28.034 [2024-12-09 05:31:41.853873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.853974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.853997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.854008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.854017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.854039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 00:38:28.034 [2024-12-09 05:31:41.863901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.863968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.863990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.864001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.864011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.864033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 
00:38:28.034 [2024-12-09 05:31:41.873979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.874048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.874069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.874081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.874091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.874114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 00:38:28.034 [2024-12-09 05:31:41.884157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.884253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.884274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.884286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.884295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.884317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 00:38:28.034 [2024-12-09 05:31:41.893957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.894023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.894044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.894055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.894064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.894086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 
00:38:28.034 [2024-12-09 05:31:41.904055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.904164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.904185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.904200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.904209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.904231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 00:38:28.034 [2024-12-09 05:31:41.914041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.914108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.914130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.914141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.914150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.914173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 00:38:28.034 [2024-12-09 05:31:41.924270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.924348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.924369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.924385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.924394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.924417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 
00:38:28.034 [2024-12-09 05:31:41.934081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.934144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.034 [2024-12-09 05:31:41.934165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.034 [2024-12-09 05:31:41.934177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.034 [2024-12-09 05:31:41.934192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.034 [2024-12-09 05:31:41.934214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.034 qpair failed and we were unable to recover it. 00:38:28.034 [2024-12-09 05:31:41.944115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.034 [2024-12-09 05:31:41.944177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.035 [2024-12-09 05:31:41.944198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.035 [2024-12-09 05:31:41.944209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.035 [2024-12-09 05:31:41.944219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.035 [2024-12-09 05:31:41.944244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.035 qpair failed and we were unable to recover it. 00:38:28.035 [2024-12-09 05:31:41.954056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.035 [2024-12-09 05:31:41.954120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.035 [2024-12-09 05:31:41.954141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.035 [2024-12-09 05:31:41.954152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.035 [2024-12-09 05:31:41.954161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.035 [2024-12-09 05:31:41.954182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.035 qpair failed and we were unable to recover it. 
00:38:28.035 [2024-12-09 05:31:41.964365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.035 [2024-12-09 05:31:41.964436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.035 [2024-12-09 05:31:41.964457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.035 [2024-12-09 05:31:41.964468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.035 [2024-12-09 05:31:41.964477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.035 [2024-12-09 05:31:41.964499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.035 qpair failed and we were unable to recover it. 00:38:28.035 [2024-12-09 05:31:41.974170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.035 [2024-12-09 05:31:41.974237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.035 [2024-12-09 05:31:41.974258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.035 [2024-12-09 05:31:41.974269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.035 [2024-12-09 05:31:41.974278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.035 [2024-12-09 05:31:41.974300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.035 qpair failed and we were unable to recover it. 00:38:28.035 [2024-12-09 05:31:41.984140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.035 [2024-12-09 05:31:41.984238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.035 [2024-12-09 05:31:41.984259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.035 [2024-12-09 05:31:41.984272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.035 [2024-12-09 05:31:41.984281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.035 [2024-12-09 05:31:41.984306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.035 qpair failed and we were unable to recover it. 
00:38:28.035 [2024-12-09 05:31:41.994245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.035 [2024-12-09 05:31:41.994308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.035 [2024-12-09 05:31:41.994329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.035 [2024-12-09 05:31:41.994340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.035 [2024-12-09 05:31:41.994349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.035 [2024-12-09 05:31:41.994371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.035 qpair failed and we were unable to recover it. 00:38:28.035 [2024-12-09 05:31:42.004497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.035 [2024-12-09 05:31:42.004570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.035 [2024-12-09 05:31:42.004592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.035 [2024-12-09 05:31:42.004603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.035 [2024-12-09 05:31:42.004612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.035 [2024-12-09 05:31:42.004635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.035 qpair failed and we were unable to recover it. 00:38:28.035 [2024-12-09 05:31:42.014334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.035 [2024-12-09 05:31:42.014400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.035 [2024-12-09 05:31:42.014421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.035 [2024-12-09 05:31:42.014432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.035 [2024-12-09 05:31:42.014441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.035 [2024-12-09 05:31:42.014463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.035 qpair failed and we were unable to recover it. 
00:38:28.035 [2024-12-09 05:31:42.024324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.035 [2024-12-09 05:31:42.024389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.035 [2024-12-09 05:31:42.024410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.035 [2024-12-09 05:31:42.024422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.035 [2024-12-09 05:31:42.024431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.035 [2024-12-09 05:31:42.024453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.035 qpair failed and we were unable to recover it. 00:38:28.297 [2024-12-09 05:31:42.034345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.297 [2024-12-09 05:31:42.034409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.297 [2024-12-09 05:31:42.034433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.297 [2024-12-09 05:31:42.034445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.297 [2024-12-09 05:31:42.034454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.297 [2024-12-09 05:31:42.034475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.297 qpair failed and we were unable to recover it. 00:38:28.297 [2024-12-09 05:31:42.044604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.297 [2024-12-09 05:31:42.044678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.297 [2024-12-09 05:31:42.044699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.297 [2024-12-09 05:31:42.044710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.297 [2024-12-09 05:31:42.044719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.297 [2024-12-09 05:31:42.044741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 
00:38:28.298 [2024-12-09 05:31:42.054392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.054466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.054498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.054512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.054523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.054551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 00:38:28.298 [2024-12-09 05:31:42.064410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.064487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.064518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.064533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.064543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.064571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 00:38:28.298 [2024-12-09 05:31:42.074470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.074557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.074589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.074603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.074618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.074646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 
00:38:28.298 [2024-12-09 05:31:42.084692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.084766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.084790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.084802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.084812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.084842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 00:38:28.298 [2024-12-09 05:31:42.094493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.094553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.094575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.094587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.094596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.094618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 00:38:28.298 [2024-12-09 05:31:42.104529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.104591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.104613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.104625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.104634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.104657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 
00:38:28.298 [2024-12-09 05:31:42.114468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.114531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.114552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.114564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.114573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.114595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 00:38:28.298 [2024-12-09 05:31:42.124711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.124782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.124803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.124819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.124828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.124850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 00:38:28.298 [2024-12-09 05:31:42.134606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.134670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.134692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.134703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.134712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.134735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 
00:38:28.298 [2024-12-09 05:31:42.144613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.144676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.144698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.144709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.144719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.144741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 00:38:28.298 [2024-12-09 05:31:42.154670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.154738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.154760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.154771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.154781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.154803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 00:38:28.298 [2024-12-09 05:31:42.164901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.164990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.165016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.165027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.165037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.165059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 
00:38:28.298 [2024-12-09 05:31:42.174727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.298 [2024-12-09 05:31:42.174793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.298 [2024-12-09 05:31:42.174819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.298 [2024-12-09 05:31:42.174831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.298 [2024-12-09 05:31:42.174840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.298 [2024-12-09 05:31:42.174863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.298 qpair failed and we were unable to recover it. 00:38:28.299 [2024-12-09 05:31:42.184745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.184805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.184831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.184842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.184851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.184872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 00:38:28.299 [2024-12-09 05:31:42.194792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.194864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.194891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.194902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.194912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.194934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 
00:38:28.299 [2024-12-09 05:31:42.205006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.205070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.205091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.205102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.205115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.205137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 00:38:28.299 [2024-12-09 05:31:42.214841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.214904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.214926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.214937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.214946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.214968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 00:38:28.299 [2024-12-09 05:31:42.224822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.224890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.224915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.224927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.224936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.224960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 
00:38:28.299 [2024-12-09 05:31:42.234840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.234907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.234929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.234940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.234949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.234972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 00:38:28.299 [2024-12-09 05:31:42.245127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.245198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.245219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.245230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.245239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.245262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 00:38:28.299 [2024-12-09 05:31:42.254921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.254987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.255008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.255020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.255029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.255052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 
00:38:28.299 [2024-12-09 05:31:42.264962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.265030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.265051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.265062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.265071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.265093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 00:38:28.299 [2024-12-09 05:31:42.275045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.275112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.275133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.275145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.275153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.275175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 00:38:28.299 [2024-12-09 05:31:42.285243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.299 [2024-12-09 05:31:42.285346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.299 [2024-12-09 05:31:42.285368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.299 [2024-12-09 05:31:42.285379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.299 [2024-12-09 05:31:42.285388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.299 [2024-12-09 05:31:42.285409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.299 qpair failed and we were unable to recover it. 
00:38:28.561 [2024-12-09 05:31:42.295017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.561 [2024-12-09 05:31:42.295085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.561 [2024-12-09 05:31:42.295109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.561 [2024-12-09 05:31:42.295121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.561 [2024-12-09 05:31:42.295130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.561 [2024-12-09 05:31:42.295151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.561 qpair failed and we were unable to recover it. 00:38:28.561 [2024-12-09 05:31:42.305065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.561 [2024-12-09 05:31:42.305129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.561 [2024-12-09 05:31:42.305150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.561 [2024-12-09 05:31:42.305161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.561 [2024-12-09 05:31:42.305169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.561 [2024-12-09 05:31:42.305191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.561 qpair failed and we were unable to recover it. 00:38:28.561 [2024-12-09 05:31:42.315110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.561 [2024-12-09 05:31:42.315177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.561 [2024-12-09 05:31:42.315198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.561 [2024-12-09 05:31:42.315209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.561 [2024-12-09 05:31:42.315219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.561 [2024-12-09 05:31:42.315244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.561 qpair failed and we were unable to recover it. 
00:38:28.561 [2024-12-09 05:31:42.325311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.561 [2024-12-09 05:31:42.325384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.561 [2024-12-09 05:31:42.325406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.561 [2024-12-09 05:31:42.325417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.561 [2024-12-09 05:31:42.325427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.561 [2024-12-09 05:31:42.325450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.561 qpair failed and we were unable to recover it. 00:38:28.561 [2024-12-09 05:31:42.335181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.561 [2024-12-09 05:31:42.335242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.561 [2024-12-09 05:31:42.335263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.561 [2024-12-09 05:31:42.335278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.561 [2024-12-09 05:31:42.335286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.561 [2024-12-09 05:31:42.335308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.561 qpair failed and we were unable to recover it. 00:38:28.562 [2024-12-09 05:31:42.345217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.345277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.345298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.345309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.345318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.345340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 
00:38:28.562 [2024-12-09 05:31:42.355233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.355295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.355316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.355327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.355337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.355358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 00:38:28.562 [2024-12-09 05:31:42.365449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.365516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.365537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.365548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.365557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.365579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 00:38:28.562 [2024-12-09 05:31:42.375284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.375352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.375373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.375385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.375394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.375419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 
00:38:28.562 [2024-12-09 05:31:42.385221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.385287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.385308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.385319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.385329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.385350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 00:38:28.562 [2024-12-09 05:31:42.395332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.395401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.395422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.395433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.395442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.395463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 00:38:28.562 [2024-12-09 05:31:42.405570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.405643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.405664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.405675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.405685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.405708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 
00:38:28.562 [2024-12-09 05:31:42.415287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.415349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.415370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.415381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.415390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.415412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 00:38:28.562 [2024-12-09 05:31:42.425352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.425417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.425439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.425450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.425459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.425480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 00:38:28.562 [2024-12-09 05:31:42.435341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.435406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.435428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.435439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.435448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.435470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 
00:38:28.562 [2024-12-09 05:31:42.445774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.445845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.445866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.445878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.445887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.445914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 00:38:28.562 [2024-12-09 05:31:42.455486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.455551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.455572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.455583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.455592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.455614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 00:38:28.562 [2024-12-09 05:31:42.465492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.562 [2024-12-09 05:31:42.465549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.562 [2024-12-09 05:31:42.465571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.562 [2024-12-09 05:31:42.465587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.562 [2024-12-09 05:31:42.465596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.562 [2024-12-09 05:31:42.465618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.562 qpair failed and we were unable to recover it. 
00:38:28.563 [2024-12-09 05:31:42.475446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.563 [2024-12-09 05:31:42.475516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.563 [2024-12-09 05:31:42.475537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.563 [2024-12-09 05:31:42.475549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.563 [2024-12-09 05:31:42.475558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.563 [2024-12-09 05:31:42.475579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.563 qpair failed and we were unable to recover it. 00:38:28.563 [2024-12-09 05:31:42.485763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.563 [2024-12-09 05:31:42.485847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.563 [2024-12-09 05:31:42.485879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.563 [2024-12-09 05:31:42.485893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.563 [2024-12-09 05:31:42.485903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.563 [2024-12-09 05:31:42.485932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.563 qpair failed and we were unable to recover it. 00:38:28.563 [2024-12-09 05:31:42.495600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.563 [2024-12-09 05:31:42.495670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.563 [2024-12-09 05:31:42.495694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.563 [2024-12-09 05:31:42.495706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.563 [2024-12-09 05:31:42.495716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.563 [2024-12-09 05:31:42.495739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.563 qpair failed and we were unable to recover it. 
00:38:28.563 [2024-12-09 05:31:42.505596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.563 [2024-12-09 05:31:42.505660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.563 [2024-12-09 05:31:42.505682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.563 [2024-12-09 05:31:42.505694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.563 [2024-12-09 05:31:42.505703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.563 [2024-12-09 05:31:42.505729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.563 qpair failed and we were unable to recover it. 00:38:28.563 [2024-12-09 05:31:42.515632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.563 [2024-12-09 05:31:42.515697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.563 [2024-12-09 05:31:42.515719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.563 [2024-12-09 05:31:42.515730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.563 [2024-12-09 05:31:42.515739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.563 [2024-12-09 05:31:42.515762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.563 qpair failed and we were unable to recover it. 00:38:28.563 [2024-12-09 05:31:42.525905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.563 [2024-12-09 05:31:42.525975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.563 [2024-12-09 05:31:42.525997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.563 [2024-12-09 05:31:42.526008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.563 [2024-12-09 05:31:42.526017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.563 [2024-12-09 05:31:42.526039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.563 qpair failed and we were unable to recover it. 
00:38:28.563 [2024-12-09 05:31:42.535656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.563 [2024-12-09 05:31:42.535718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.563 [2024-12-09 05:31:42.535739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.563 [2024-12-09 05:31:42.535750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.563 [2024-12-09 05:31:42.535760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.563 [2024-12-09 05:31:42.535782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.563 qpair failed and we were unable to recover it. 00:38:28.563 [2024-12-09 05:31:42.545798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.563 [2024-12-09 05:31:42.545867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.563 [2024-12-09 05:31:42.545888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.563 [2024-12-09 05:31:42.545899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.563 [2024-12-09 05:31:42.545908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.563 [2024-12-09 05:31:42.545930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.563 qpair failed and we were unable to recover it. 00:38:28.825 [2024-12-09 05:31:42.555665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:28.825 [2024-12-09 05:31:42.555727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:28.825 [2024-12-09 05:31:42.555748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:28.825 [2024-12-09 05:31:42.555759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:28.825 [2024-12-09 05:31:42.555768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:28.825 [2024-12-09 05:31:42.555789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.825 qpair failed and we were unable to recover it. 
00:38:28.825 [2024-12-09 05:31:42.565903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.825 [2024-12-09 05:31:42.565974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.825 [2024-12-09 05:31:42.565995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.825 [2024-12-09 05:31:42.566006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.825 [2024-12-09 05:31:42.566015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.825 [2024-12-09 05:31:42.566037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.825 qpair failed and we were unable to recover it.
00:38:28.825 [2024-12-09 05:31:42.575883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.825 [2024-12-09 05:31:42.575951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.825 [2024-12-09 05:31:42.575972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.825 [2024-12-09 05:31:42.575984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.825 [2024-12-09 05:31:42.575993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.825 [2024-12-09 05:31:42.576015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.825 qpair failed and we were unable to recover it.
00:38:28.825 [2024-12-09 05:31:42.585742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.825 [2024-12-09 05:31:42.585807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.825 [2024-12-09 05:31:42.585834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.585845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.585854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.585876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.595753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.595819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.595843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.595855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.595864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.595886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.606082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.606170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.606191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.606202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.606211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.606233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.615911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.616006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.616027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.616039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.616048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.616069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.625957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.626026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.626047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.626058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.626067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.626088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.636046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.636111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.636132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.636143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.636155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.636177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.646267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.646374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.646395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.646406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.646415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.646442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.656039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.656103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.656124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.656135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.656144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.656166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.666055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.666117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.666137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.666149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.666157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.666179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.676114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.676180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.676201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.676212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.676221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.676244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.686323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.686391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.686412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.686424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.686433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.686454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.696077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.696182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.696204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.696215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.696224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.696245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.706192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.706259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.706280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.826 [2024-12-09 05:31:42.706297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.826 [2024-12-09 05:31:42.706307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.826 [2024-12-09 05:31:42.706331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.826 qpair failed and we were unable to recover it.
00:38:28.826 [2024-12-09 05:31:42.716217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.826 [2024-12-09 05:31:42.716280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.826 [2024-12-09 05:31:42.716301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.716315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.716326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.716349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.726441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.726556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.726581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.726592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.726601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.726624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.736285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.736348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.736369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.736380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.736389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.736412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.746288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.746352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.746374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.746385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.746395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.746418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.756328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.756393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.756414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.756425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.756434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.756456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.766551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.766630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.766651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.766663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.766675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.766697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.776405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.776488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.776519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.776534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.776544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.776572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.786400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.786484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.786516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.786531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.786542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.786570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.796451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.796519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.796543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.796555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.796565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.796588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.806657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.806731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.806753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.806764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.806774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.806797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:28.827 [2024-12-09 05:31:42.816488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.827 [2024-12-09 05:31:42.816564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.827 [2024-12-09 05:31:42.816585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.827 [2024-12-09 05:31:42.816598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.827 [2024-12-09 05:31:42.816607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:28.827 [2024-12-09 05:31:42.816629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:28.827 qpair failed and we were unable to recover it.
00:38:29.089 [2024-12-09 05:31:42.826505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.089 [2024-12-09 05:31:42.826567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.089 [2024-12-09 05:31:42.826588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.089 [2024-12-09 05:31:42.826600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.089 [2024-12-09 05:31:42.826609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.089 [2024-12-09 05:31:42.826631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.089 qpair failed and we were unable to recover it.
00:38:29.089 [2024-12-09 05:31:42.836506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.089 [2024-12-09 05:31:42.836571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.089 [2024-12-09 05:31:42.836593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.089 [2024-12-09 05:31:42.836604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.089 [2024-12-09 05:31:42.836613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.089 [2024-12-09 05:31:42.836635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.089 qpair failed and we were unable to recover it.
00:38:29.089 [2024-12-09 05:31:42.846782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.089 [2024-12-09 05:31:42.846855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.089 [2024-12-09 05:31:42.846877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.089 [2024-12-09 05:31:42.846889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.089 [2024-12-09 05:31:42.846898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.089 [2024-12-09 05:31:42.846920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.089 qpair failed and we were unable to recover it.
00:38:29.089 [2024-12-09 05:31:42.856564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.089 [2024-12-09 05:31:42.856626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.089 [2024-12-09 05:31:42.856647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.089 [2024-12-09 05:31:42.856658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.089 [2024-12-09 05:31:42.856668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.089 [2024-12-09 05:31:42.856690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.089 qpair failed and we were unable to recover it.
00:38:29.089 [2024-12-09 05:31:42.866614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.089 [2024-12-09 05:31:42.866680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.089 [2024-12-09 05:31:42.866701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.089 [2024-12-09 05:31:42.866713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.089 [2024-12-09 05:31:42.866722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.089 [2024-12-09 05:31:42.866744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.089 qpair failed and we were unable to recover it.
00:38:29.089 [2024-12-09 05:31:42.876604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.089 [2024-12-09 05:31:42.876673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.089 [2024-12-09 05:31:42.876694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.089 [2024-12-09 05:31:42.876706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.089 [2024-12-09 05:31:42.876715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.089 [2024-12-09 05:31:42.876737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.089 qpair failed and we were unable to recover it.
00:38:29.089 [2024-12-09 05:31:42.886863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.886942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.886963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.886974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.886983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.887006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.896667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.896729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.896750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.896766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.896775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.896798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.906683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.906742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.906764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.906775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.906784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.906807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.916790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.916864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.916886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.916897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.916907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.916928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.926953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.927023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.927045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.927056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.927066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.927087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.936762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.936831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.936853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.936864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.936874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.936899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.946720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.946835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.946857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.946870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.946879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.946902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.956850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.956914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.956936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.956947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.956956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.956978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.967071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.967163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.967185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.967197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.967206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.967229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.976914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.976978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.977000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.977012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.977022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.977049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.986916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.987001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.987023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.987035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.987045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.987067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:42.996938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:42.997000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:42.997021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:42.997032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:42.997042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:42.997064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:43.007179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.090 [2024-12-09 05:31:43.007249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.090 [2024-12-09 05:31:43.007271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.090 [2024-12-09 05:31:43.007282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.090 [2024-12-09 05:31:43.007291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.090 [2024-12-09 05:31:43.007312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.090 qpair failed and we were unable to recover it.
00:38:29.090 [2024-12-09 05:31:43.016985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.091 [2024-12-09 05:31:43.017053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.091 [2024-12-09 05:31:43.017075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.091 [2024-12-09 05:31:43.017087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.091 [2024-12-09 05:31:43.017096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.091 [2024-12-09 05:31:43.017117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.091 qpair failed and we were unable to recover it.
00:38:29.091 [2024-12-09 05:31:43.026989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.091 [2024-12-09 05:31:43.027068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.091 [2024-12-09 05:31:43.027089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.091 [2024-12-09 05:31:43.027105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.091 [2024-12-09 05:31:43.027114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.091 [2024-12-09 05:31:43.027135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.091 qpair failed and we were unable to recover it.
00:38:29.091 [2024-12-09 05:31:43.037072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.091 [2024-12-09 05:31:43.037171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.091 [2024-12-09 05:31:43.037192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.091 [2024-12-09 05:31:43.037203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.091 [2024-12-09 05:31:43.037213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.091 [2024-12-09 05:31:43.037234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.091 qpair failed and we were unable to recover it.
00:38:29.091 [2024-12-09 05:31:43.047291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.091 [2024-12-09 05:31:43.047356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.091 [2024-12-09 05:31:43.047377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.091 [2024-12-09 05:31:43.047389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.091 [2024-12-09 05:31:43.047398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.091 [2024-12-09 05:31:43.047420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.091 qpair failed and we were unable to recover it.
00:38:29.091 [2024-12-09 05:31:43.057114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.091 [2024-12-09 05:31:43.057179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.091 [2024-12-09 05:31:43.057200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.091 [2024-12-09 05:31:43.057211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.091 [2024-12-09 05:31:43.057220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.091 [2024-12-09 05:31:43.057243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.091 qpair failed and we were unable to recover it.
00:38:29.091 [2024-12-09 05:31:43.067157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.091 [2024-12-09 05:31:43.067216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.091 [2024-12-09 05:31:43.067237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.091 [2024-12-09 05:31:43.067248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.091 [2024-12-09 05:31:43.067257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.091 [2024-12-09 05:31:43.067283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.091 qpair failed and we were unable to recover it.
00:38:29.091 [2024-12-09 05:31:43.077201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.091 [2024-12-09 05:31:43.077279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.091 [2024-12-09 05:31:43.077301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.091 [2024-12-09 05:31:43.077312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.091 [2024-12-09 05:31:43.077321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.091 [2024-12-09 05:31:43.077343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.091 qpair failed and we were unable to recover it.
00:38:29.352 [2024-12-09 05:31:43.087394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.352 [2024-12-09 05:31:43.087474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.352 [2024-12-09 05:31:43.087495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.352 [2024-12-09 05:31:43.087507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.352 [2024-12-09 05:31:43.087516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.352 [2024-12-09 05:31:43.087538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.352 qpair failed and we were unable to recover it.
00:38:29.352 [2024-12-09 05:31:43.097205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.352 [2024-12-09 05:31:43.097279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.352 [2024-12-09 05:31:43.097300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.352 [2024-12-09 05:31:43.097311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.352 [2024-12-09 05:31:43.097321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.352 [2024-12-09 05:31:43.097342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.352 qpair failed and we were unable to recover it.
00:38:29.352 [2024-12-09 05:31:43.107137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.352 [2024-12-09 05:31:43.107197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.353 [2024-12-09 05:31:43.107219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.353 [2024-12-09 05:31:43.107230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.353 [2024-12-09 05:31:43.107240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.353 [2024-12-09 05:31:43.107262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.353 qpair failed and we were unable to recover it.
00:38:29.353 [2024-12-09 05:31:43.117203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.353 [2024-12-09 05:31:43.117268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.353 [2024-12-09 05:31:43.117289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.353 [2024-12-09 05:31:43.117301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.353 [2024-12-09 05:31:43.117310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.353 [2024-12-09 05:31:43.117334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.353 qpair failed and we were unable to recover it.
00:38:29.353 [2024-12-09 05:31:43.127424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.353 [2024-12-09 05:31:43.127498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.353 [2024-12-09 05:31:43.127519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.353 [2024-12-09 05:31:43.127530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.353 [2024-12-09 05:31:43.127539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700
00:38:29.353 [2024-12-09 05:31:43.127561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:29.353 qpair failed and we were unable to recover it.
00:38:29.353 [2024-12-09 05:31:43.137318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.137386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.137407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.137419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.137428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.137449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 00:38:29.353 [2024-12-09 05:31:43.147358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.147420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.147441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.147453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.147462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.147484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 00:38:29.353 [2024-12-09 05:31:43.157413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.157483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.157509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.157520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.157530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.157552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 
00:38:29.353 [2024-12-09 05:31:43.167606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.167680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.167713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.167727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.167738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.167766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 00:38:29.353 [2024-12-09 05:31:43.177435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.177503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.177526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.177539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.177549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.177573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 00:38:29.353 [2024-12-09 05:31:43.187537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.187603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.187625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.187637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.187647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.187670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 
00:38:29.353 [2024-12-09 05:31:43.197532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.197641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.197673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.197688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.197702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.197733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 00:38:29.353 [2024-12-09 05:31:43.207702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.207803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.207832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.207845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.207855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.207879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 00:38:29.353 [2024-12-09 05:31:43.217541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.217603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.217624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.217636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.217651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.217674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 
00:38:29.353 [2024-12-09 05:31:43.227574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.227639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.353 [2024-12-09 05:31:43.227661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.353 [2024-12-09 05:31:43.227673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.353 [2024-12-09 05:31:43.227682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.353 [2024-12-09 05:31:43.227705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.353 qpair failed and we were unable to recover it. 00:38:29.353 [2024-12-09 05:31:43.237492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.353 [2024-12-09 05:31:43.237555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.237577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.237588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.237597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.237620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 00:38:29.354 [2024-12-09 05:31:43.247833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.247902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.247923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.247934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.247943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.247965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 
00:38:29.354 [2024-12-09 05:31:43.257636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.257697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.257718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.257729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.257738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.257760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 00:38:29.354 [2024-12-09 05:31:43.267667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.267759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.267781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.267792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.267801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.267839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 00:38:29.354 [2024-12-09 05:31:43.277691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.277798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.277825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.277837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.277846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.277868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 
00:38:29.354 [2024-12-09 05:31:43.287928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.287995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.288020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.288031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.288041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.288063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 00:38:29.354 [2024-12-09 05:31:43.297766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.297835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.297856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.297867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.297876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.297899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 00:38:29.354 [2024-12-09 05:31:43.307701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.307775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.307795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.307807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.307820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.307846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 
00:38:29.354 [2024-12-09 05:31:43.317822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.317888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.317910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.317921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.317930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.317953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 00:38:29.354 [2024-12-09 05:31:43.327961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.328042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.328063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.328074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.328087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.328110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 00:38:29.354 [2024-12-09 05:31:43.337854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.354 [2024-12-09 05:31:43.337921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.354 [2024-12-09 05:31:43.337942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.354 [2024-12-09 05:31:43.337954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.354 [2024-12-09 05:31:43.337963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.354 [2024-12-09 05:31:43.337985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.354 qpair failed and we were unable to recover it. 
00:38:29.616 [2024-12-09 05:31:43.347890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.347969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.347990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.348002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.348011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.348034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 00:38:29.616 [2024-12-09 05:31:43.357931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.358007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.358028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.358040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.358049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.358071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 00:38:29.616 [2024-12-09 05:31:43.368177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.368252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.368273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.368285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.368295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.368317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 
00:38:29.616 [2024-12-09 05:31:43.377907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.377985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.378007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.378018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.378028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.378050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 00:38:29.616 [2024-12-09 05:31:43.388000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.388065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.388086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.388097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.388106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.388129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 00:38:29.616 [2024-12-09 05:31:43.398065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.398133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.398154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.398165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.398175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.398197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 
00:38:29.616 [2024-12-09 05:31:43.408279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.408366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.408387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.408399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.408408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.408430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 00:38:29.616 [2024-12-09 05:31:43.418082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.418153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.418174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.418185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.418195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.418217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 00:38:29.616 [2024-12-09 05:31:43.428104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.428167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.428187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.428199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.428208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.428231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 
00:38:29.616 [2024-12-09 05:31:43.438151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.438253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.438275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.438286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.438295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.616 [2024-12-09 05:31:43.438317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.616 qpair failed and we were unable to recover it. 00:38:29.616 [2024-12-09 05:31:43.448331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.616 [2024-12-09 05:31:43.448437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.616 [2024-12-09 05:31:43.448458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.616 [2024-12-09 05:31:43.448470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.616 [2024-12-09 05:31:43.448479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.448500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 00:38:29.617 [2024-12-09 05:31:43.458201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.458268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.458290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.458310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.458320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.458343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 
00:38:29.617 [2024-12-09 05:31:43.468137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.468202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.468223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.468235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.468244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.468267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 00:38:29.617 [2024-12-09 05:31:43.478250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.478316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.478343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.478354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.478364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.478386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 00:38:29.617 [2024-12-09 05:31:43.488504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.488596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.488617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.488629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.488638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.488659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 
00:38:29.617 [2024-12-09 05:31:43.498292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.498355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.498376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.498388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.498397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.498423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 00:38:29.617 [2024-12-09 05:31:43.508399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.508471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.508492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.508504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.508513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.508535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 00:38:29.617 [2024-12-09 05:31:43.518369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.518432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.518453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.518465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.518474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.518496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 
00:38:29.617 [2024-12-09 05:31:43.528585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.528648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.528669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.528680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.528689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.528711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 00:38:29.617 [2024-12-09 05:31:43.538480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.538540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.538561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.538572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.538581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.538604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 00:38:29.617 [2024-12-09 05:31:43.548449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.548517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.548538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.548549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.548559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.548581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 
00:38:29.617 [2024-12-09 05:31:43.558491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.558556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.558576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.558587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.558596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.558618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 00:38:29.617 [2024-12-09 05:31:43.568707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.568781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.568802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.568814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.617 [2024-12-09 05:31:43.568828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.617 [2024-12-09 05:31:43.568851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.617 qpair failed and we were unable to recover it. 00:38:29.617 [2024-12-09 05:31:43.578521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.617 [2024-12-09 05:31:43.578587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.617 [2024-12-09 05:31:43.578608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.617 [2024-12-09 05:31:43.578619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.618 [2024-12-09 05:31:43.578629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.618 [2024-12-09 05:31:43.578651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.618 qpair failed and we were unable to recover it. 
00:38:29.618 [2024-12-09 05:31:43.588459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.618 [2024-12-09 05:31:43.588519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.618 [2024-12-09 05:31:43.588540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.618 [2024-12-09 05:31:43.588555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.618 [2024-12-09 05:31:43.588564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.618 [2024-12-09 05:31:43.588586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.618 qpair failed and we were unable to recover it. 00:38:29.618 [2024-12-09 05:31:43.598565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.618 [2024-12-09 05:31:43.598630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.618 [2024-12-09 05:31:43.598651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.618 [2024-12-09 05:31:43.598662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.618 [2024-12-09 05:31:43.598671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.618 [2024-12-09 05:31:43.598694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.618 qpair failed and we were unable to recover it. 00:38:29.879 [2024-12-09 05:31:43.608625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.879 [2024-12-09 05:31:43.608696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.879 [2024-12-09 05:31:43.608717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.879 [2024-12-09 05:31:43.608728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.879 [2024-12-09 05:31:43.608737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.879 [2024-12-09 05:31:43.608759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.879 qpair failed and we were unable to recover it. 
00:38:29.879 [2024-12-09 05:31:43.618546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.879 [2024-12-09 05:31:43.618612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.879 [2024-12-09 05:31:43.618633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.879 [2024-12-09 05:31:43.618644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.879 [2024-12-09 05:31:43.618654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.879 [2024-12-09 05:31:43.618676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.879 qpair failed and we were unable to recover it. 00:38:29.879 [2024-12-09 05:31:43.628658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.879 [2024-12-09 05:31:43.628757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.879 [2024-12-09 05:31:43.628779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.879 [2024-12-09 05:31:43.628790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.879 [2024-12-09 05:31:43.628800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.879 [2024-12-09 05:31:43.628829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.879 qpair failed and we were unable to recover it. 00:38:29.879 [2024-12-09 05:31:43.638672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.879 [2024-12-09 05:31:43.638745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.879 [2024-12-09 05:31:43.638767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.879 [2024-12-09 05:31:43.638780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.879 [2024-12-09 05:31:43.638791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000394700 00:38:29.879 [2024-12-09 05:31:43.638823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:29.879 qpair failed and we were unable to recover it. 
00:38:29.879 [2024-12-09 05:31:43.648942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.879 [2024-12-09 05:31:43.649083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.879 [2024-12-09 05:31:43.649166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.879 [2024-12-09 05:31:43.649209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.879 [2024-12-09 05:31:43.649242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:38:29.879 [2024-12-09 05:31:43.649325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:38:29.879 qpair failed and we were unable to recover it. 00:38:29.879 [2024-12-09 05:31:43.658749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.879 [2024-12-09 05:31:43.658867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.879 [2024-12-09 05:31:43.658914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.879 [2024-12-09 05:31:43.658940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.879 [2024-12-09 05:31:43.658961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:38:29.879 [2024-12-09 05:31:43.659012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:38:29.879 qpair failed and we were unable to recover it. 
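The block above repeats a single failure signature: on the target side, ctrlr.c rejects each I/O-queue CONNECT because controller ID 0x1 is no longer known (consistent with the admin controller being torn down mid-test), and on the host side nvme_fabric.c reports the CONNECT completing with sct 1, sc 130. Decoded against the NVMe-oF Fabrics status values, sct 1 is the command-specific status type and sc 130 is 0x82, "Connect Invalid Parameters", the expected rejection for a stale controller ID. The helper below is a standalone, illustrative decode of those two fields; it is not SPDK source, the function name is ours, and the value mapping follows the NVMe-oF specification.

/*
 * Illustrative decode of the repeated host-side status "sct 1, sc 130".
 * Standalone sketch, not SPDK code; values follow the NVMe-oF Fabrics
 * CONNECT status definitions.
 */
#include <stdio.h>

static const char *connect_status_str(unsigned int sct, unsigned int sc)
{
    if (sct != 0x1) {              /* 0x1 = Command Specific status type */
        return "not a command-specific status";
    }
    switch (sc) {
    case 0x80: return "CONNECT: incompatible format";
    case 0x81: return "CONNECT: controller busy";
    case 0x82: return "CONNECT: invalid parameters";  /* sc 130 above */
    case 0x83: return "CONNECT: restart discovery";
    case 0x84: return "CONNECT: invalid host";
    default:   return "unrecognized status code";
    }
}

int main(void)
{
    /* sct 1, sc 130 from the log: 130 == 0x82, i.e. the stale
     * controller ID 0x1 is rejected as an invalid parameter. */
    printf("%s\n", connect_status_str(1, 130));
    return 0;
}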
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Read completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.879 Write completed with error (sct=0, sc=8)
00:38:29.879 starting I/O failed
00:38:29.880 Write completed with error (sct=0, sc=8)
00:38:29.880 starting I/O failed
00:38:29.880 Read completed with error (sct=0, sc=8)
00:38:29.880 starting I/O failed
00:38:29.880 Read completed with error (sct=0, sc=8)
00:38:29.880 starting I/O failed
00:38:29.880 Write completed with error (sct=0, sc=8)
00:38:29.880 starting I/O failed
00:38:29.880 Read completed with error (sct=0, sc=8)
00:38:29.880 starting I/O failed
00:38:29.880 [2024-12-09 05:31:43.660746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:29.880 [2024-12-09 05:31:43.668854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.880 [2024-12-09 05:31:43.668995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.880 [2024-12-09 05:31:43.669060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.880 [2024-12-09 05:31:43.669097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.880 [2024-12-09 05:31:43.669126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00
00:38:29.880 [2024-12-09 05:31:43.669193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:29.880 qpair failed and we were unable to recover it.
00:38:29.880 [2024-12-09 05:31:43.679024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.880 [2024-12-09 05:31:43.679139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.880 [2024-12-09 05:31:43.679195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.880 [2024-12-09 05:31:43.679228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.880 [2024-12-09 05:31:43.679247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00
00:38:29.880 [2024-12-09 05:31:43.679290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:29.880 qpair failed and we were unable to recover it.
00:38:29.880 [2024-12-09 05:31:43.680097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000393d00 is same with the state(6) to be set
00:38:29.880 [2024-12-09 05:31:43.689056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.880 [2024-12-09 05:31:43.689192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.880 [2024-12-09 05:31:43.689272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.880 [2024-12-09 05:31:43.689313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.880 [2024-12-09 05:31:43.689344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003c0080
00:38:29.880 [2024-12-09 05:31:43.689421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.880 qpair failed and we were unable to recover it.
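For readers decoding the (sct, sc) pairs above: sct is the NVMe Status Code Type and sc the Status Code. On a reading of the NVMe base and NVMe-oF specs (worth verifying against the current revision), sct=0/sc=8 is the generic "Command Aborted due to SQ Deletion" and sct=1/sc=130 (0x82) is the Fabrics Connect "Invalid Parameters" status; both are expected while this test forcibly disconnects the target. A minimal decoding helper, not part of the test suite:

# decode_nvme_status SCT SC - map the pairs seen in this log to spec names
decode_nvme_status() {
  local sct=$1 sc=$2
  case "$sct/$sc" in
    0/8)   echo "Generic: Command Aborted due to SQ Deletion" ;;
    1/130) echo "Command Specific (Fabrics Connect): Invalid Parameters" ;;
    *)     echo "sct=$sct sc=$sc (see the NVMe spec status code tables)" ;;
  esac
}
decode_nvme_status 1 130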
00:38:29.880 [2024-12-09 05:31:43.699087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.880 [2024-12-09 05:31:43.699220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.880 [2024-12-09 05:31:43.699291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.880 [2024-12-09 05:31:43.699326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.880 [2024-12-09 05:31:43.699353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003c0080
00:38:29.880 [2024-12-09 05:31:43.699422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.880 qpair failed and we were unable to recover it.
00:38:29.880 [2024-12-09 05:31:43.700697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393d00 (9): Bad file descriptor
00:38:29.880 Initializing NVMe Controllers
00:38:29.880 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:29.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:29.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:38:29.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:38:29.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:38:29.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:38:29.880 Initialization complete. Launching workers.
00:38:29.880 Starting thread on core 1
00:38:29.880 Starting thread on core 2
00:38:29.880 Starting thread on core 3
00:38:29.880 Starting thread on core 0
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:38:29.880
00:38:29.880 real 0m11.704s
00:38:29.880 user 0m21.134s
00:38:29.880 sys 0m4.033s
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:29.880 ************************************
00:38:29.880 END TEST nvmf_target_disconnect_tc2
00:38:29.880 ************************************
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:29.880 rmmod nvme_tcp
00:38:29.880 rmmod nvme_fabrics
00:38:29.880 rmmod nvme_keyring
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1831455 ']'
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1831455
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1831455 ']'
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1831455
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:29.880 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1831455
00:38:30.140 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:38:30.140 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:38:30.140 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1831455'
00:38:30.140 killing process with pid 1831455
00:38:30.140 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1831455
00:38:30.140 05:31:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1831455
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:30.710 05:31:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:33.257 05:31:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:33.257
00:38:33.257 real 0m22.791s
00:38:33.257 user 0m51.193s
00:38:33.257 sys 0m10.480s
00:38:33.257 05:31:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:33.257 05:31:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:38:33.257 ************************************
00:38:33.257 END TEST nvmf_target_disconnect
00:38:33.257 ************************************
00:38:33.258 05:31:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:38:33.258
00:38:33.258 real 8m20.362s
00:38:33.258 user 18m23.072s
00:38:33.258 sys 2m32.459s
00:38:33.258 05:31:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:33.258 05:31:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:38:33.258 ************************************
00:38:33.258 END TEST nvmf_host
00:38:33.258 ************************************
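The nvmftestfini teardown traced above reduces to a handful of shell steps; a condensed sketch (command names taken from the trace, not a verbatim copy of nvmf/common.sh, and the body of _remove_spdk_ns is an assumption):

modprobe -v -r nvme-tcp           # trace shows this unloading nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 1831455 && wait 1831455      # stop the nvmf_tgt app started for the suite
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop the SPDK-tagged test rules
ip netns delete cvl_0_0_ns_spdk   # assumed content of _remove_spdk_ns
ip -4 addr flush cvl_0_1          # clear the initiator-side test address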
00:38:33.258 05:31:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:38:33.258 05:31:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:38:33.258 05:31:46 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:38:33.258 05:31:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:38:33.258 05:31:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:33.258 05:31:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:38:33.258 ************************************
00:38:33.258 START TEST nvmf_target_core_interrupt_mode
00:38:33.258 ************************************
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:38:33.258 * Looking for test storage...
00:38:33.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:38:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:33.258 --rc genhtml_branch_coverage=1
00:38:33.258 --rc genhtml_function_coverage=1
00:38:33.258 --rc genhtml_legend=1
00:38:33.258 --rc geninfo_all_blocks=1
00:38:33.258 --rc geninfo_unexecuted_blocks=1
00:38:33.258
00:38:33.258 '
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:38:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:33.258 --rc genhtml_branch_coverage=1
00:38:33.258 --rc genhtml_function_coverage=1
00:38:33.258 --rc genhtml_legend=1
00:38:33.258 --rc geninfo_all_blocks=1
00:38:33.258 --rc geninfo_unexecuted_blocks=1
00:38:33.258
00:38:33.258 '
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:38:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:33.258 --rc genhtml_branch_coverage=1
00:38:33.258 --rc genhtml_function_coverage=1
00:38:33.258 --rc genhtml_legend=1
00:38:33.258 --rc geninfo_all_blocks=1
00:38:33.258 --rc geninfo_unexecuted_blocks=1
00:38:33.258
00:38:33.258 '
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:38:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:33.258 --rc genhtml_branch_coverage=1
00:38:33.258 --rc genhtml_function_coverage=1
00:38:33.258 --rc genhtml_legend=1
00:38:33.258 --rc geninfo_all_blocks=1
00:38:33.258 --rc geninfo_unexecuted_blocks=1
00:38:33.258
00:38:33.258 '
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:33.258 05:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:38:33.259 ************************************
00:38:33.259 START TEST nvmf_abort
00:38:33.259 ************************************
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:38:33.259 * Looking for test storage...
00:38:33.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:38:33.259 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:38:33.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:33.520 --rc genhtml_branch_coverage=1
00:38:33.520 --rc genhtml_function_coverage=1
00:38:33.520 --rc genhtml_legend=1
00:38:33.520 --rc geninfo_all_blocks=1
00:38:33.520 --rc geninfo_unexecuted_blocks=1
00:38:33.520
00:38:33.520 '
00:38:33.520 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:38:33.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:33.520 --rc genhtml_branch_coverage=1
00:38:33.520 --rc genhtml_function_coverage=1
00:38:33.520 --rc genhtml_legend=1
00:38:33.520 --rc geninfo_all_blocks=1
00:38:33.520 --rc geninfo_unexecuted_blocks=1
00:38:33.520
00:38:33.520 '
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:38:33.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:33.521 --rc genhtml_branch_coverage=1
00:38:33.521 --rc genhtml_function_coverage=1
00:38:33.521 --rc genhtml_legend=1
00:38:33.521 --rc geninfo_all_blocks=1
00:38:33.521 --rc geninfo_unexecuted_blocks=1
00:38:33.521
00:38:33.521 '
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:38:33.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:33.521 --rc genhtml_branch_coverage=1
00:38:33.521 --rc genhtml_function_coverage=1
00:38:33.521 --rc genhtml_legend=1
00:38:33.521 --rc geninfo_all_blocks=1
00:38:33.521 --rc geninfo_unexecuted_blocks=1
00:38:33.521
00:38:33.521 '
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
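An aside on the growing PATH values above: each time paths/export.sh is re-sourced, the same toolchain directories are prepended again, so the variable accumulates duplicates run over run. The test does not deduplicate, but if one wanted to, a minimal sketch:

# dedupe_path: print $PATH with repeated directories removed, first occurrence wins
dedupe_path() {
  local out= dir
  local IFS=:
  for dir in $PATH; do
    case ":$out:" in
      *":$dir:"*) ;;                  # already present, skip
      *) out=${out:+$out:}$dir ;;     # append with a ':' separator
    esac
  done
  printf '%s\n' "$out"
}
PATH=$(dedupe_path)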
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:38:33.521 05:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:38:41.734 Found 0000:31:00.0 (0x8086 - 0x159b)
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
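The device scan traced above matches Intel E810 NICs by PCI vendor/device ID (0x8086:0x159b) and then maps each PCI function to its kernel netdev through sysfs ("/sys/bus/pci/devices/$pci/net/"). A hand-run equivalent of the same lookup, assuming lspci is installed (the addresses are the ones this run found):

lspci -d 8086:159b                        # list E810 functions, 0000:31:00.0 and .1 here
ls /sys/bus/pci/devices/0000:31:00.0/net  # kernel netdev for that function, cvl_0_0 in this run
ls /sys/bus/pci/devices/0000:31:00.1/net  # second port, cvl_0_1 in this run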
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:41.734 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:38:41.735 Found 0000:31:00.1 (0x8086 - 0x159b)
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:38:41.735 Found net devices under 0000:31:00.0: cvl_0_0
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:38:41.735 Found net devices under 0000:31:00.1: cvl_0_1
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:38:41.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:41.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms
00:38:41.735
00:38:41.735 --- 10.0.0.2 ping statistics ---
00:38:41.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:41.735 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:41.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:41.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms
00:38:41.735
00:38:41.735 --- 10.0.0.1 ping statistics ---
00:38:41.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:41.735 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
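The nvmf_tcp_init steps traced above build a two-endpoint TCP topology on one physical NIC pair: port cvl_0_0 moves into a private network namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and a firewall rule admits the NVMe-oF port. A condensed sketch of exactly the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # root ns -> target ns reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse direction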
nvmfpid=1837244 00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1837244 00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1837244 ']' 00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:41.735 05:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.735 [2024-12-09 05:31:54.716474] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:41.735 [2024-12-09 05:31:54.719145] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:38:41.735 [2024-12-09 05:31:54.719245] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.735 [2024-12-09 05:31:54.881523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:41.735 [2024-12-09 05:31:55.006981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:41.735 [2024-12-09 05:31:55.007043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:41.735 [2024-12-09 05:31:55.007058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:41.735 [2024-12-09 05:31:55.007070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:41.735 [2024-12-09 05:31:55.007082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:41.735 [2024-12-09 05:31:55.009803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:41.735 [2024-12-09 05:31:55.009915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:41.735 [2024-12-09 05:31:55.009942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:41.736 [2024-12-09 05:31:55.288514] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:41.736 [2024-12-09 05:31:55.289248] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:41.736 [2024-12-09 05:31:55.289283] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:38:41.736 [2024-12-09 05:31:55.289562] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.736 [2024-12-09 05:31:55.535450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.736 Malloc0 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.736 Delay0 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
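The provisioning RPCs traced above, collected into one runnable shape (flags copied from the trace; per my reading, bdev_delay_create latencies are in microseconds, so 1000000 is roughly one second on every path):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256     # flags as traced
    $rpc bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB backing bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s read/write latencies
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0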
00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.736 [2024-12-09 05:31:55.691213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.736 05:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:41.997 [2024-12-09 05:31:55.912048] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:44.540 Initializing NVMe Controllers 00:38:44.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:44.540 controller IO queue size 128 less than required 00:38:44.540 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:44.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:44.540 Initialization complete. Launching workers. 
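The namespace attach, the listeners, and the abort workload launched above, in one place (paths as printed in the trace; -t 1 appears to be the run time in seconds). Queue depth 128 against a ~1 s Delay0 bdev guarantees a deep backlog of in-flight commands for abort to target, which is presumably why the tool warns that the controller IO queue size is less than required:

    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128      # qd 128 against the ~1 s Delay0 bdev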
00:38:44.540 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27360 00:38:44.540 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27421, failed to submit 66 00:38:44.540 success 27360, unsuccessful 61, failed 0 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:44.540 rmmod nvme_tcp 00:38:44.540 rmmod nvme_fabrics 00:38:44.540 rmmod nvme_keyring 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1837244 ']' 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1837244 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1837244 ']' 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1837244 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1837244 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1837244' 00:38:44.540 killing process with pid 1837244 
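The final counters above balance exactly (my reading of the fields, not an authoritative decode of the tool's output):

    127 completed + 27360 aborted                = 27487 I/Os issued
    27421 aborts submitted + 66 failed to submit = 27487  (one abort attempted per I/O)
    61 unsuccessful + 66 never submitted         = 127    (the I/Os that ran to normal completion)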
00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1837244 00:38:44.540 05:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1837244 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.111 05:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:47.656 00:38:47.656 real 0m14.076s 00:38:47.656 user 0m12.876s 00:38:47.656 sys 0m6.887s 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:47.656 ************************************ 00:38:47.656 END TEST nvmf_abort 00:38:47.656 ************************************ 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:47.656 ************************************ 00:38:47.656 START TEST nvmf_ns_hotplug_stress 00:38:47.656 ************************************ 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:47.656 * Looking for test storage... 
00:38:47.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:47.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.656 --rc genhtml_branch_coverage=1 00:38:47.656 --rc genhtml_function_coverage=1 00:38:47.656 --rc genhtml_legend=1 00:38:47.656 --rc geninfo_all_blocks=1 00:38:47.656 --rc geninfo_unexecuted_blocks=1 00:38:47.656 00:38:47.656 ' 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:47.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.656 --rc genhtml_branch_coverage=1 00:38:47.656 --rc genhtml_function_coverage=1 00:38:47.656 --rc genhtml_legend=1 00:38:47.656 --rc geninfo_all_blocks=1 00:38:47.656 --rc geninfo_unexecuted_blocks=1 00:38:47.656 00:38:47.656 ' 00:38:47.656 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:47.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.656 --rc genhtml_branch_coverage=1 00:38:47.657 --rc genhtml_function_coverage=1 00:38:47.657 --rc genhtml_legend=1 00:38:47.657 --rc geninfo_all_blocks=1 00:38:47.657 --rc geninfo_unexecuted_blocks=1 00:38:47.657 00:38:47.657 ' 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:47.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.657 --rc genhtml_branch_coverage=1 00:38:47.657 --rc genhtml_function_coverage=1 
00:38:47.657 --rc genhtml_legend=1 00:38:47.657 --rc geninfo_all_blocks=1 00:38:47.657 --rc geninfo_unexecuted_blocks=1 00:38:47.657 00:38:47.657 ' 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
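The lt/cmp_versions trace above decides whether the installed lcov predates 2.x, which gates the branch-coverage flags. A simplified re-sketch of the idea under a hypothetical name (numeric components assumed; the real scripts/common.sh additionally normalizes non-numeric parts through its decimal helper):

    version_lt() {                       # succeeds when $1 sorts before $2
        local IFS=.- i x y a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}    # missing components compare as 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                         # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov < 2: keep the branch-coverage flags'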
00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:47.657 05:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:55.794 05:32:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:55.794 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:55.795 05:32:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:55.795 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:55.795 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:55.795 
05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:55.795 Found net devices under 0000:31:00.0: cvl_0_0 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:55.795 Found net devices under 0000:31:00.1: cvl_0_1 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:55.795 05:32:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:55.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:55.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:38:55.795 00:38:55.795 --- 10.0.0.2 ping statistics --- 00:38:55.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.795 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:55.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:55.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:38:55.795 00:38:55.795 --- 10.0.0.1 ping statistics --- 00:38:55.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.795 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:55.795 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1842101 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1842101 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1842101 ']' 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:55.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
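Both tests rebuild the same two-port topology; a condensed sketch of the nvmf_tcp_init sequence as traced above (device names are the two e810 ports found earlier; the iptables comment is elided here but is what lets teardown grep the rule back out):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'               # tag for later cleanup
    ping -c 1 10.0.0.2                                     # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> root ns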
00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:55.796 05:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:55.796 [2024-12-09 05:32:08.830939] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:55.796 [2024-12-09 05:32:08.833256] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:38:55.796 [2024-12-09 05:32:08.833343] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:55.796 [2024-12-09 05:32:08.983847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:55.796 [2024-12-09 05:32:09.084655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:55.796 [2024-12-09 05:32:09.084699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:55.796 [2024-12-09 05:32:09.084712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:55.796 [2024-12-09 05:32:09.084726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:55.796 [2024-12-09 05:32:09.084739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:55.796 [2024-12-09 05:32:09.086917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:55.796 [2024-12-09 05:32:09.087179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:55.796 [2024-12-09 05:32:09.087202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:55.796 [2024-12-09 05:32:09.331589] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:55.796 [2024-12-09 05:32:09.332268] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:55.796 [2024-12-09 05:32:09.332342] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:55.796 [2024-12-09 05:32:09.332639] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
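The -m 0xE and -c 0x1 core masks explain the reactor and "Total cores available" lines above; a quick decode:

    0xE = 0b1110  -> CPUs 1, 2, 3   # three target reactors, matching "Total cores available: 3"
    0x1 = 0b0001  -> CPU 0          # the abort/perf initiator tools, kept off the target's cores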
00:38:55.796 05:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.796 05:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:55.796 05:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:55.796 05:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:55.796 05:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:55.796 05:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:55.796 05:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:55.796 05:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:56.058 [2024-12-09 05:32:09.792481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:56.058 05:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:56.058 05:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:56.319 [2024-12-09 05:32:10.205323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:56.319 05:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:56.579 05:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:56.840 Malloc0 00:38:56.840 05:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:57.101 Delay0 00:38:57.101 05:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:57.101 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:57.362 NULL1 00:38:57.362 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
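What follows in the trace is the heart of the stress test: a random-read load pinned to core 0 while the namespace is hot-removed and re-added and NULL1 is grown 1 MiB per pass. A condensed sketch of that loop (PERF_PID and null_size are the script's own names, as traced below):

    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &          # 30 s random-read load
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do              # hotplug only while the load lives
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"           # grow NULL1 by 1 MiB per pass
    done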
00:38:57.622 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:57.622 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1842671 00:38:57.622 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:38:57.622 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:57.622 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:57.882 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:57.882 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:58.141 true 00:38:58.141 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:38:58.141 05:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:58.401 05:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.401 05:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:58.401 05:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:58.662 true 00:38:58.662 05:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:38:58.662 05:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:58.923 05:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:59.184 05:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:59.184 05:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:59.184 true 00:38:59.184 05:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:38:59.184 05:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:59.444 05:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:59.704 05:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:59.704 05:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:59.963 true 00:38:59.963 05:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:38:59.963 05:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:59.963 05:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.224 05:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:00.224 05:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:00.483 true 00:39:00.483 05:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:00.483 05:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:00.743 05:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.743 05:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:00.743 05:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:01.002 true 00:39:01.002 05:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:01.003 05:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.262 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.262 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:39:01.262 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:01.522 true 00:39:01.522 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:01.522 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.781 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.042 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:02.042 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:02.042 true 00:39:02.042 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:02.042 05:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:02.302 05:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.563 05:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:39:02.563 05:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:39:02.563 true 00:39:02.563 05:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:02.563 05:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:02.824 05:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:03.085 05:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:03.085 05:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:03.085 true 00:39:03.345 05:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1842671 00:39:03.345 05:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.345 05:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:03.606 05:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:03.606 05:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:03.867 true 00:39:03.867 05:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:03.867 05:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.867 05:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.127 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:04.127 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:04.387 true 00:39:04.387 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:04.387 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:04.647 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.647 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:04.648 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:04.908 true 00:39:04.908 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:04.908 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.167 05:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.167 05:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:05.167 05:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:05.427 true 00:39:05.427 05:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:05.427 05:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.688 05:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.949 05:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:05.949 05:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:05.949 true 00:39:05.949 05:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:05.949 05:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.209 05:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.469 05:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:06.469 05:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:06.469 true 00:39:06.469 05:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:06.469 05:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.729 05:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.990 05:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:06.990 05:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:07.252 true 00:39:07.252 05:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:07.252 05:32:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.252 05:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.512 05:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:07.512 05:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:07.772 true 00:39:07.772 05:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:07.772 05:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.772 05:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.032 05:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:08.032 05:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:08.291 true 00:39:08.291 05:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:08.292 05:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.550 05:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.550 05:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:08.550 05:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:08.809 true 00:39:08.809 05:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:08.809 05:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.069 05:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.330 05:32:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:09.330 05:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:09.330 true 00:39:09.330 05:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:09.330 05:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.591 05:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.850 05:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:09.850 05:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:09.850 true 00:39:09.850 05:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:09.850 05:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.111 05:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.371 05:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:10.372 05:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:10.372 true 00:39:10.631 05:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:10.631 05:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.631 05:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.891 05:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:10.891 05:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:39:11.151 true 00:39:11.151 05:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:11.151 05:32:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.151 05:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.410 05:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:39:11.410 05:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:39:11.670 true 00:39:11.670 05:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:11.670 05:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.930 05:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.931 05:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:39:11.931 05:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:39:12.189 true 00:39:12.189 05:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:12.189 05:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.448 05:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:12.448 05:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:39:12.449 05:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:39:12.708 true 00:39:12.708 05:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:12.708 05:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.967 05:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.227 05:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:39:13.227 05:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:39:13.227 true 00:39:13.227 05:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:13.227 05:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.486 05:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.745 05:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:39:13.745 05:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:39:13.745 true 00:39:14.005 05:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:14.005 05:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.005 05:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.265 05:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:39:14.265 05:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:39:14.525 true 00:39:14.525 05:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:14.525 05:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.525 05:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.788 05:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:39:14.788 05:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:39:15.048 true 00:39:15.048 05:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:15.048 05:32:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.308 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.309 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:39:15.309 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:39:15.569 true 00:39:15.569 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:15.569 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.830 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.830 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:39:15.830 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:39:16.091 true 00:39:16.091 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:16.091 05:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.351 05:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:16.612 05:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:39:16.612 05:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:39:16.612 true 00:39:16.612 05:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:16.612 05:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.873 05:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.133 05:32:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:39:17.133 05:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:39:17.133 true 00:39:17.133 05:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:17.133 05:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.392 05:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.652 05:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:39:17.652 05:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:39:17.652 true 00:39:17.652 05:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:17.652 05:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.910 05:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.168 05:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:39:18.168 05:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:39:18.427 true 00:39:18.427 05:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:18.427 05:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.427 05:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.685 05:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:39:18.685 05:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:39:18.943 true 00:39:18.943 05:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:18.943 05:32:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.201 05:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.201 05:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:39:19.201 05:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:39:19.459 true 00:39:19.459 05:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:19.459 05:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.720 05:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.720 05:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:39:19.720 05:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:39:19.980 true 00:39:19.980 05:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:19.980 05:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.240 05:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.501 05:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:39:20.501 05:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:39:20.501 true 00:39:20.501 05:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:20.501 05:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.761 05:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.023 05:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:39:21.023 05:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:39:21.023 true 00:39:21.023 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:21.023 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.283 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.543 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:39:21.544 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:39:21.544 true 00:39:21.804 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:21.804 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.804 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.066 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:39:22.066 05:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:39:22.325 true 00:39:22.325 05:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:22.325 05:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.325 05:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.587 05:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:39:22.587 05:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:39:22.848 true 00:39:22.848 05:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:22.848 05:32:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.848 05:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.109 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:39:23.109 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:39:23.370 true 00:39:23.370 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:23.370 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.631 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.631 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:39:23.631 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:39:23.893 true 00:39:23.893 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:23.893 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.154 05:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.154 05:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:39:24.154 05:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:39:24.415 true 00:39:24.415 05:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:24.415 05:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.676 05:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.938 05:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:39:24.938 05:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:39:24.938 true 00:39:24.938 05:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:24.938 05:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.199 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.461 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:39:25.461 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:39:25.461 true 00:39:25.723 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:25.723 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.723 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.984 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:39:25.984 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:39:25.984 true 00:39:26.246 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:26.246 05:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.246 05:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.507 05:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:39:26.507 05:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:39:26.769 true 00:39:26.769 05:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671 00:39:26.769 05:32:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:26.769 05:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:27.030 05:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:39:27.030 05:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:39:27.292 true
00:39:27.292 05:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671
00:39:27.292 05:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:27.553 05:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:27.553 05:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:39:27.553 05:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:39:27.814 true
00:39:27.814 05:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671
00:39:27.814 05:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:27.814 Initializing NVMe Controllers
00:39:27.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:27.814 Controller IO queue size 128, less than required.
00:39:27.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:27.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:39:27.814 Initialization complete. Launching workers.
00:39:27.814 ========================================================
00:39:27.814                                                                                  Latency(us)
00:39:27.814 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:39:27.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   27268.79      13.31    4694.14    1259.79   46537.41
00:39:27.814 ========================================================
00:39:27.814 Total                                                                    :   27268.79      13.31    4694.14    1259.79   46537.41
00:39:28.076 05:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:28.076 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:39:28.076 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:39:28.336 true
00:39:28.336 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1842671
00:39:28.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1842671) - No such process
00:39:28.336 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1842671
00:39:28.336 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:28.624 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:28.624 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:39:28.624 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:39:28.624 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:39:28.624 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:28.624 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:39:28.988 null0
00:39:28.988 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:28.988 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:28.988 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:39:28.988 null1
00:39:28.988 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:28.988 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
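Two quick consistency checks on the perf summary above: the MiB/s column is just the IOPS figure times the 512-byte I/O size (-o 512), and by Little's law the average latency is the queue depth (-q 128) divided by the IOPS:

\( 27268.79 \times 512\,\mathrm{B} = 13{,}961{,}620\,\mathrm{B/s},\quad 13{,}961{,}620 / 2^{20} \approx 13.31\,\mathrm{MiB/s} \)
\( W = L/\lambda = 128 / 27268.79\,\mathrm{s^{-1}} \approx 4694\,\mu\mathrm{s} \)

Both match the reported 13.31 MiB/s and 4694.14 us average latency.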
00:39:28.988 05:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:39:29.312 null2 00:39:29.312 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.312 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.312 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:29.312 null3 00:39:29.312 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.312 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.312 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:29.572 null4 00:39:29.572 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.572 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.572 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:29.833 null5 00:39:29.834 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.834 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.834 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:29.834 null6 00:39:29.834 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.834 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.834 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:39:30.095 null7 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.095 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1848848 1848849 1848851 1848853 1848855 1848857 1848859 1848861 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.096 05:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:30.357 05:32:44 
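The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls from here on come from eight concurrent copies of the add_remove helper (the @14-@18 markers), each hot-plugging its own namespace ten times, which is why every worker's trace keeps re-testing "(( i < 10 ))". A minimal sketch of that helper, assuming the exact script body matches the traced commands:

  add_remove() {
      local nsid=$1 bdev=$2
      # hot-plug stress: attach the bdev as namespace $nsid, then detach it, 10 times
      for ((i = 0; i < 10; ++i)); do
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

Because the eight workers are unsynchronized, their add and remove lines interleave freely in the log; the ordering within any one nsid is still strictly add-then-remove per iteration.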
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.357 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.617 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:30.877 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.877 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.877 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:30.877 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.877 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.877 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.878 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:31.138 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:31.138 05:32:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:31.139 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:31.139 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.139 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:31.139 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:31.139 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:31.139 05:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.139 05:32:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.139 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:31.398 
05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.398 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:31.658 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.918 
05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.918 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:32.178 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:32.178 05:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:32.178 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:32.178 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:32.178 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:32.178 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:32.178 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.178 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:32.178 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.178 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.178 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.440 05:32:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:32.440 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:32.701 05:32:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.701 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:32.962 
05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.962 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:33.222 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.222 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.222 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:33.222 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.222 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.222 05:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:33.222 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.222 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.222 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:33.222 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:33.222 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:33.222 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.222 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.223 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:33.223 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:33.223 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:33.223 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.223 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:33.223 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
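[annotation] The churn above is the core of the hotplug stress: the @16-@18 loop in ns_hotplug_stress.sh runs ten iterations per pass, hot-adding a namespace (nvmf_subsystem_add_ns -n <nsid> <nqn> <bdev>) and hot-removing one (nvmf_subsystem_remove_ns <nqn> <nsid>) while initiator I/O keeps running. A minimal sketch of one such worker, assuming null bdevs null0..null7 already exist on the target; the real script interleaves several of these workers, which is why adds and removes pair up out of order in the capture, and the randomized nsid order here is purely illustrative:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do                                    # @16: ten iterations per worker
        n=$(( (RANDOM % 8) + 1 ))                             # pick a namespace ID 1..8
        $rpc nvmf_subsystem_add_ns -n $n $nqn null$((n - 1))  # @17: attach bdev null(n-1) as NSID n
        $rpc nvmf_subsystem_remove_ns $nqn $n                 # @18: detach it again, mid-I/O
        (( ++i ))
    done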
00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.483 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.744 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:34.005 rmmod nvme_tcp 00:39:34.005 rmmod nvme_fabrics 00:39:34.005 rmmod nvme_keyring 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1842101 ']' 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1842101 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1842101 ']' 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1842101 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:34.005 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1842101 00:39:34.266 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:34.266 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:34.266 
05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1842101' 00:39:34.266 killing process with pid 1842101 00:39:34.266 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1842101 00:39:34.266 05:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1842101 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:34.837 05:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.748 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:36.748 00:39:36.748 real 0m49.518s 00:39:36.748 user 3m4.458s 00:39:36.748 sys 0m21.569s 00:39:36.748 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:36.748 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:36.748 ************************************ 00:39:36.748 END TEST nvmf_ns_hotplug_stress 00:39:36.748 ************************************ 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:37.009 ************************************ 00:39:37.009 START TEST nvmf_delete_subsystem 00:39:37.009 ************************************ 00:39:37.009 05:32:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:37.009 * Looking for test storage... 00:39:37.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.009 05:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.009 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.009 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:37.009 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.009 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.009 --rc genhtml_branch_coverage=1 00:39:37.009 --rc genhtml_function_coverage=1 00:39:37.009 --rc genhtml_legend=1 00:39:37.009 --rc geninfo_all_blocks=1 00:39:37.009 --rc geninfo_unexecuted_blocks=1 00:39:37.009 00:39:37.009 ' 00:39:37.009 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.009 --rc genhtml_branch_coverage=1 00:39:37.009 --rc genhtml_function_coverage=1 00:39:37.009 --rc genhtml_legend=1 00:39:37.009 --rc geninfo_all_blocks=1 00:39:37.009 --rc geninfo_unexecuted_blocks=1 00:39:37.009 00:39:37.009 ' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:37.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.271 --rc genhtml_branch_coverage=1 00:39:37.271 --rc genhtml_function_coverage=1 00:39:37.271 --rc genhtml_legend=1 00:39:37.271 --rc geninfo_all_blocks=1 00:39:37.271 --rc geninfo_unexecuted_blocks=1 00:39:37.271 00:39:37.271 ' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:37.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.271 --rc genhtml_branch_coverage=1 00:39:37.271 --rc genhtml_function_coverage=1 00:39:37.271 --rc 
genhtml_legend=1 00:39:37.271 --rc geninfo_all_blocks=1 00:39:37.271 --rc geninfo_unexecuted_blocks=1 00:39:37.271 00:39:37.271 ' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.271 05:32:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.271 05:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.409 05:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:45.409 05:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:45.409 05:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:45.409 05:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:45.409 05:32:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:45.409 05:32:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:45.409 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:45.409 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:45.409 05:32:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:45.409 Found net devices under 0000:31:00.0: cvl_0_0 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:45.409 Found net devices under 0000:31:00.1: cvl_0_1 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:45.409 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:45.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:45.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:39:45.410 00:39:45.410 --- 10.0.0.2 ping statistics --- 00:39:45.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.410 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:45.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:45.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:39:45.410 00:39:45.410 --- 10.0.0.1 ping statistics --- 00:39:45.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.410 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1854042 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1854042 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1854042 ']' 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:45.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
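[annotation] For a phy (NET_TYPE=phy) run, nvmf/common.sh splits the two E810 ports into target and initiator sides: cvl_0_0 is moved into a private network namespace for the target, cvl_0_1 stays in the root namespace for the initiator, and a tagged iptables rule opens the NVMe/TCP listener port. Condensed from the ip/iptables/ping records above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the rule carries an SPDK_NVMF comment so teardown can filter it back out
    # via iptables-save | grep -v SPDK_NVMF | iptables-restore (see the nvmftestfini above)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target (0.581 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator (0.290 ms above)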
00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:45.410 05:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.410 [2024-12-09 05:32:58.432116] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:45.410 [2024-12-09 05:32:58.434431] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:39:45.410 [2024-12-09 05:32:58.434517] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:45.410 [2024-12-09 05:32:58.582698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:45.410 [2024-12-09 05:32:58.682692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:45.410 [2024-12-09 05:32:58.682733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:45.410 [2024-12-09 05:32:58.682749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:45.410 [2024-12-09 05:32:58.682759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:45.410 [2024-12-09 05:32:58.682771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:45.410 [2024-12-09 05:32:58.684622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.410 [2024-12-09 05:32:58.684647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:45.410 [2024-12-09 05:32:58.928455] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:45.410 [2024-12-09 05:32:58.928549] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:45.410 [2024-12-09 05:32:58.928737] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
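[annotation] With nvmf_tgt up in interrupt mode inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3, pid 1854042, reactors on cores 0 and 1), the rpc_cmd records that follow provision everything the test needs. Condensed, with comments reflecting my reading of the rpc.py flags:

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u 8192 sets the I/O
                                                          # unit size (-o is a TCP-specific
                                                          # toggle carried in by the harness)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                    # allow any host, up to 10 namespaces
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s avg/p99 latency, reads and writes
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is the point of the setup: with every I/O held for about a second, the subsystem is guaranteed to have commands in flight whenever it gets deleted.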
00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.410 [2024-12-09 05:32:59.253771] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.410 [2024-12-09 05:32:59.290276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.410 NULL1 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.410 05:32:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.410 Delay0 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.410 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.411 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1854242 00:39:45.411 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:45.411 05:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:45.671 [2024-12-09 05:32:59.467983] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
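[annotation] The test then aims spdk_nvme_perf at the listener and pulls the subsystem out from under it: the very next record is the nvmf_delete_subsystem call, fired after the sleep 2 at @30 while perf (launched at @26, tracked as perf_pid=1854242 at @28) still has around a second of delayed I/O outstanding. The sequence, condensed:

    spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &   # @26: 5 s of 70/30 randrw, QD 128,
    perf_pid=$!                                     #      512 B I/O, on cores 2-3 (0xC)
    sleep 2                                         # @30: let the queues fill with delayed I/O
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # @32: yank the subsystem mid-I/O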
00:39:47.588 05:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:39:47.588 05:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:47.588 05:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:47.850 [ several hundred 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and repeated 'starting I/O failed: -6' markers omitted; they were interleaved with the qpair state errors kept below ]
00:39:47.850 [2024-12-09 05:33:01.641303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026c80 is same with the state(6) to be set
00:39:47.851 [2024-12-09 05:33:01.645225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(6) to be set
00:39:47.851 [2024-12-09 05:33:01.645939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030a00 is same with the state(6) to be set
00:39:48.793 [2024-12-09 05:33:02.612742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025d80 is same with the state(6) to be set
00:39:48.793 [2024-12-09 05:33:02.645375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set
00:39:48.793 [2024-12-09 05:33:02.645989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026780 is same with the state(6) to be set
00:39:48.793 [2024-12-09 05:33:02.647481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(6) to be set
00:39:48.793 [2024-12-09 05:33:02.647952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030f00 is same with the state(6) to be set
00:39:48.793 05:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:48.793 05:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:39:48.793 05:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1854242
00:39:48.793 Initializing NVMe Controllers
00:39:48.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:48.793 Controller IO queue size 128, less than required.
00:39:48.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:48.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:39:48.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:39:48.793 Initialization complete. Launching workers.
00:39:48.793 ========================================================
00:39:48.793                                                                           Latency(us)
00:39:48.793 Device Information                                                      :     IOPS    MiB/s    Average        min        max
00:39:48.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   172.02     0.08  891767.08     588.15 1010189.53
00:39:48.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   164.56     0.08  908905.25     748.60 1014118.28
00:39:48.793 ========================================================
00:39:48.793 Total                                                                   :   336.57     0.16  900146.31     588.15 1014118.28
00:39:48.793
00:39:48.793 05:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:39:48.793 [2024-12-09 05:33:02.651795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025d80 (9): Bad file descriptor
00:39:48.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1854242
00:39:49.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1854242) - No such process
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1854242
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1854242
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1854242
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:49.365 [2024-12-09 05:33:03.182214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1855117
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1855117
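[ The NOT/wait trace above is the suite's negative assertion: once the subsystem deletion has killed the perf job, waiting on its pid must report failure. A rough sketch of the helper, paraphrased from the common/autotest_common.sh line numbers in the trace; the real helper also special-cases es > 128 and an allowed-error list, visible at @663/@674 above. ]

    # Succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?    # here 'wait 1854242' returns perf's non-zero exit status
        (( es != 0 ))    # es=1 in the trace, so NOT -- and the test step -- passes
    }

    NOT wait "$perf_pid"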
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:39:49.365 05:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:39:49.365 [2024-12-09 05:33:03.317759] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:39:49.936 [ polling loop: target/delete_subsystem.sh@60 '(( delay++ > 20 ))' / @57 'kill -0 1855117' / @58 'sleep 0.5' repeats six times between 00:39:49.936 and 00:39:52.481 while the 3-second perf run completes ]
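[ The condensed loop above is the script's bounded wait for the load generator; reconstructed shape -- only the three traced commands at @57/@58/@60 are verbatim, the loop framing is inferred. ]

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # perf still running?
        (( delay++ > 20 )) && exit 1             # give up after ~10 s of 0.5 s naps
        sleep 0.5
    done

[ Unlike the first run, this one is left to finish: with Delay0 adding 1,000,000 us to every operation, the summary below shows averages just over 1.0 s per I/O, and at queue depth 128 that works out to 128 / ~1.004 s, i.e. about 128 IOPS per core, which is exactly what the table reports. ]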
00:39:52.742 Initializing NVMe Controllers
00:39:52.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:52.742 Controller IO queue size 128, less than required.
00:39:52.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:52.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:39:52.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:39:52.742 Initialization complete. Launching workers.
00:39:52.742 ========================================================
00:39:52.742                                                                           Latency(us)
00:39:52.742 Device Information                                                      :     IOPS    MiB/s    Average        min        max
00:39:52.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00     0.06 1003811.44 1000163.07 1045383.58
00:39:52.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00     0.06 1004683.82 1000697.91 1010704.79
00:39:52.742 ========================================================
00:39:52.742 Total                                                                   :   256.00     0.12 1004247.63 1000163.07 1045383.58
00:39:52.742
00:39:52.742 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:39:52.742 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1855117
00:39:52.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1855117) - No such process
00:39:52.742 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1855117
00:39:52.742 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:53.003 rmmod nvme_tcp
00:39:53.003 rmmod nvme_fabrics
00:39:53.003 rmmod nvme_keyring
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1854042 ']'
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1854042
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1854042 ']'
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1854042
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1854042
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1854042'
00:39:53.003 killing process with pid 1854042
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1854042
00:39:53.003 05:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1854042
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:39:53.574 05:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:39:56.125
00:39:56.125 real 0m18.711s
00:39:56.125 user 0m27.529s
00:39:56.125 sys 0m7.453s
00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:56.125 ************************************
00:39:56.125 END TEST nvmf_delete_subsystem
00:39:56.125 ************************************
00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:56.125 ************************************ 00:39:56.125 START TEST nvmf_host_management 00:39:56.125 ************************************ 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:56.125 * Looking for test storage... 00:39:56.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:56.125 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:56.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.126 --rc genhtml_branch_coverage=1 00:39:56.126 --rc genhtml_function_coverage=1 00:39:56.126 --rc genhtml_legend=1 00:39:56.126 --rc geninfo_all_blocks=1 00:39:56.126 --rc geninfo_unexecuted_blocks=1 00:39:56.126 00:39:56.126 ' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:56.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.126 --rc genhtml_branch_coverage=1 00:39:56.126 --rc genhtml_function_coverage=1 00:39:56.126 --rc genhtml_legend=1 00:39:56.126 --rc geninfo_all_blocks=1 00:39:56.126 --rc geninfo_unexecuted_blocks=1 00:39:56.126 00:39:56.126 ' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:56.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.126 --rc genhtml_branch_coverage=1 00:39:56.126 --rc genhtml_function_coverage=1 00:39:56.126 --rc genhtml_legend=1 00:39:56.126 --rc geninfo_all_blocks=1 00:39:56.126 --rc geninfo_unexecuted_blocks=1 00:39:56.126 00:39:56.126 ' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:56.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.126 --rc genhtml_branch_coverage=1 00:39:56.126 --rc genhtml_function_coverage=1 00:39:56.126 --rc genhtml_legend=1 
00:39:56.126 --rc geninfo_all_blocks=1 00:39:56.126 --rc geninfo_unexecuted_blocks=1 00:39:56.126 00:39:56.126 ' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:... [ the @2-@4 entries each re-prepend the golangci, protoc and go bin directories to an already long, repetitive PATH; the fully expanded values are elided here ]
00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:... [ same expansion, ending :/var/lib/snapd/snap/bin ]
00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:39:56.126 05:33:09
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:56.126 05:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:04.272 05:33:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:04.272 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:04.272 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:04.272 Found net devices under 0000:31:00.0: cvl_0_0 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:04.272 Found net devices under 0000:31:00.1: cvl_0_1 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:04.272 05:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:04.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:04.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:40:04.272 00:40:04.272 --- 10.0.0.2 ping statistics --- 00:40:04.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.272 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:04.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:04.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:40:04.272 00:40:04.272 --- 10.0.0.1 ping statistics --- 00:40:04.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.272 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1860364 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1860364 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1860364 ']' 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:04.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:04.272 05:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:04.272 [2024-12-09 05:33:17.266015] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:04.272 [2024-12-09 05:33:17.268299] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:40:04.272 [2024-12-09 05:33:17.268384] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:04.272 [2024-12-09 05:33:17.420637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:04.272 [2024-12-09 05:33:17.528085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:04.272 [2024-12-09 05:33:17.528133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:04.273 [2024-12-09 05:33:17.528147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:04.273 [2024-12-09 05:33:17.528157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:04.273 [2024-12-09 05:33:17.528168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:04.273 [2024-12-09 05:33:17.530467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:04.273 [2024-12-09 05:33:17.530595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:04.273 [2024-12-09 05:33:17.530690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.273 [2024-12-09 05:33:17.530717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:04.273 [2024-12-09 05:33:17.812223] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:04.273 [2024-12-09 05:33:17.813217] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:04.273 [2024-12-09 05:33:17.814256] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:04.273 [2024-12-09 05:33:17.814344] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:04.273 [2024-12-09 05:33:17.814714] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
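For readers following the nvmf_tcp_init trace above, this is the whole topology in one place: the target-side port is moved into a network namespace, each side gets an address on 10.0.0.0/24, an iptables rule opens the NVMe/TCP port, connectivity is ping-checked in both directions, and nvmf_tgt is then launched inside the namespace in interrupt mode. A condensed sketch of the commands the script just traced (the device names cvl_0_0/cvl_0_1 and the namespace name are specific to this host, and everything runs as root):

ip netns add cvl_0_0_ns_spdk                         # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side interface into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the rule is tagged with a comment so teardown can filter it back out later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E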
00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:04.273 [2024-12-09 05:33:18.079957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:04.273 Malloc0 00:40:04.273 [2024-12-09 05:33:18.223867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:04.273 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1860720 00:40:04.534 05:33:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1860720 /var/tmp/bdevperf.sock 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1860720 ']' 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:04.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:04.534 { 00:40:04.534 "params": { 00:40:04.534 "name": "Nvme$subsystem", 00:40:04.534 "trtype": "$TEST_TRANSPORT", 00:40:04.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:04.534 "adrfam": "ipv4", 00:40:04.534 "trsvcid": "$NVMF_PORT", 00:40:04.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:04.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:04.534 "hdgst": ${hdgst:-false}, 00:40:04.534 "ddgst": ${ddgst:-false} 00:40:04.534 }, 00:40:04.534 "method": "bdev_nvme_attach_controller" 00:40:04.534 } 00:40:04.534 EOF 00:40:04.534 )") 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
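The gen_nvmf_target_json trace above shows how host_management.sh hands bdevperf its controller description without a temp file: a heredoc template is expanded once per subsystem id, the fragments are joined with IFS=',' inside an SPDK-style JSON config, the result is validated and pretty-printed by jq, and bdevperf reads it over process substitution, which is why its --json argument appears as /dev/fd/63. A minimal sketch of that pattern, with this run's values hardcoded in place of the $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT variables the real helper expands, and with the outer wrapper approximated (the upstream helper's full config may carry extra steps):

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do          # default: one subsystem, id 1
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # join the attach-controller objects with commas inside SPDK's JSON config
  # schema (wrapper shape is an approximation) and let jq validate the result
  local IFS=,
  jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ ${config[*]} ]
    }
  ]
}
JSON
}

# bdevperf never sees a file on disk; process substitution gives it /dev/fd/63:
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10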
00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:04.534 05:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:04.534 "params": { 00:40:04.534 "name": "Nvme0", 00:40:04.534 "trtype": "tcp", 00:40:04.534 "traddr": "10.0.0.2", 00:40:04.534 "adrfam": "ipv4", 00:40:04.534 "trsvcid": "4420", 00:40:04.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:04.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:04.534 "hdgst": false, 00:40:04.534 "ddgst": false 00:40:04.534 }, 00:40:04.534 "method": "bdev_nvme_attach_controller" 00:40:04.534 }' 00:40:04.534 [2024-12-09 05:33:18.369107] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:40:04.534 [2024-12-09 05:33:18.369229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860720 ] 00:40:04.534 [2024-12-09 05:33:18.526736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.793 [2024-12-09 05:33:18.633574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.053 Running I/O for 10 seconds... 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.315 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.316 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.316 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:40:05.316 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:40:05.316 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:05.316 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:05.316 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:05.316 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:05.316 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.316 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.316 [2024-12-09 05:33:19.222040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:05.316 [2024-12-09 05:33:19.222190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set
00:40:05.316 [... the tcp.c:1790 nvmf_tcp_qpair_set_recv_state message above repeats verbatim for tqpair=0x618000002c80 from 05:33:19.222040 through 05:33:19.222656; several dozen duplicate lines elided ...]
00:40:05.316 [2024-12-09 05:33:19.222902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:05.316 [2024-12-09 05:33:19.222959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:05.317 [... analogous print_command/print_completion pairs repeat for cid:1 through cid:55, lba advancing by 128 from 32896 to 39808, every READ completed as ABORTED - SQ DELETION; duplicate pairs elided ...]
00:40:05.318 [2024-12-09 05:33:19.224303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-09 05:33:19.224314] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.318 [2024-12-09 05:33:19.224327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:05.318 [2024-12-09 05:33:19.224338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.318 [2024-12-09 05:33:19.224350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:05.318 [2024-12-09 05:33:19.224363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.318 [2024-12-09 05:33:19.224375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:05.318 [2024-12-09 05:33:19.224386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.318 [2024-12-09 05:33:19.224398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:05.318 [2024-12-09 05:33:19.224409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.318 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.318 [2024-12-09 05:33:19.224422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:05.318 [2024-12-09 05:33:19.224432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.318 [2024-12-09 05:33:19.224445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:05.318 [2024-12-09 05:33:19.224456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.318 [2024-12-09 05:33:19.224469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:05.318 [2024-12-09 05:33:19.224479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.318 [2024-12-09 05:33:19.224491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000394700 is same with the state(6) to be set 00:40:05.318 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:05.318 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.318 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.318 [2024-12-09 05:33:19.226001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:05.318 task offset: 32768 on job 
bdev=Nvme0n1 fails 00:40:05.318 00:40:05.318 Latency(us) 00:40:05.318 [2024-12-09T04:33:19.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:05.318 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:05.318 Job: Nvme0n1 ended in about 0.22 seconds with error 00:40:05.318 Verification LBA range: start 0x0 length 0x400 00:40:05.318 Nvme0n1 : 0.22 1158.82 72.43 289.71 0.00 41926.95 5379.41 36918.61 00:40:05.318 [2024-12-09T04:33:19.315Z] =================================================================================================================== 00:40:05.318 [2024-12-09T04:33:19.315Z] Total : 1158.82 72.43 289.71 0.00 41926.95 5379.41 36918.61 00:40:05.318 [2024-12-09 05:33:19.230329] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:05.318 [2024-12-09 05:33:19.230372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000393800 (9): Bad file descriptor 00:40:05.318 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.318 05:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:40:05.579 [2024-12-09 05:33:19.325150] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:40:06.521 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1860720 00:40:06.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1860720) - No such process 00:40:06.521 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:40:06.521 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:40:06.521 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:40:06.521 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:40:06.522 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:06.522 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:06.522 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:06.522 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:06.522 { 00:40:06.522 "params": { 00:40:06.522 "name": "Nvme$subsystem", 00:40:06.522 "trtype": "$TEST_TRANSPORT", 00:40:06.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:06.522 "adrfam": "ipv4", 00:40:06.522 "trsvcid": "$NVMF_PORT", 00:40:06.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:06.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:06.522 "hdgst": ${hdgst:-false}, 00:40:06.522 "ddgst": ${ddgst:-false} 00:40:06.522 }, 00:40:06.522 "method": "bdev_nvme_attach_controller" 00:40:06.522 } 00:40:06.522 EOF 00:40:06.522 )") 00:40:06.522 05:33:20 
00:40:06.522 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:40:06.522 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:40:06.522 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:40:06.522 05:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:40:06.522 "params": {
00:40:06.522 "name": "Nvme0",
00:40:06.522 "trtype": "tcp",
00:40:06.522 "traddr": "10.0.0.2",
00:40:06.522 "adrfam": "ipv4",
00:40:06.522 "trsvcid": "4420",
00:40:06.522 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:06.522 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:06.522 "hdgst": false,
00:40:06.522 "ddgst": false
00:40:06.522 },
00:40:06.522 "method": "bdev_nvme_attach_controller"
00:40:06.522 }'
00:40:06.522 [2024-12-09 05:33:20.337164] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:40:06.522 [2024-12-09 05:33:20.337286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861076 ]
00:40:06.522 [2024-12-09 05:33:20.494621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:06.783 [2024-12-09 05:33:20.616923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:07.043 Running I/O for 1 seconds...
00:40:08.429 1844.00 IOPS, 115.25 MiB/s
00:40:08.429
00:40:08.429 Latency(us)
00:40:08.429 [2024-12-09T04:33:22.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:08.429 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:08.429 Verification LBA range: start 0x0 length 0x400
00:40:08.429 Nvme0n1 : 1.01 1887.14 117.95 0.00 0.00 33159.28 2129.92 35170.99
00:40:08.429 [2024-12-09T04:33:22.426Z] ===================================================================================================================
00:40:08.429 [2024-12-09T04:33:22.426Z] Total : 1887.14 117.95 0.00 0.00 33159.28 2129.92 35170.99
00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:40:08.689
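The bdevperf run traced above never writes a config file: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem from a heredoc, joins the stanzas with IFS=',', validates them through jq, and bdevperf reads the result on an anonymous fd (/dev/fd/62 in the trace). A minimal sketch of that mechanism with this run's target hard-coded; the outer "subsystems"/"bdev" wrapper is an assumption about what the full helper in nvmf/common.sh builds internally:

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-0}"; do
    # One attach-controller stanza per subsystem; values mirror this run.
    config+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  }
}
EOF
    )")
  done
  # Join the stanzas with commas and pretty-print/validate via jq.
  local IFS=,
  echo "{ \"subsystems\": [ { \"subsystem\": \"bdev\", \"config\": [ ${config[*]} ] } ] }" | jq .
}

# Process substitution is what produces the /dev/fd/NN path seen in the trace:
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1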
05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:08.689 rmmod nvme_tcp 00:40:08.689 rmmod nvme_fabrics 00:40:08.689 rmmod nvme_keyring 00:40:08.689 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1860364 ']' 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1860364 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1860364 ']' 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1860364 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1860364 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1860364' 00:40:08.950 killing process with pid 1860364 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1860364 00:40:08.950 05:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1860364 00:40:09.523 [2024-12-09 05:33:23.361938] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:09.523 05:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:12.065 00:40:12.065 real 0m15.917s 00:40:12.065 user 0m24.439s 00:40:12.065 sys 0m7.856s 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:12.065 ************************************ 00:40:12.065 END TEST nvmf_host_management 00:40:12.065 ************************************ 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:12.065 ************************************ 00:40:12.065 START TEST nvmf_lvol 00:40:12.065 ************************************ 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:12.065 * Looking for test storage... 
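Each suite in this log is bracketed by the asterisk banners and timed, which is where the real/user/sys triple above comes from. A hypothetical reduction of the run_test wrapper that produces this output; the real helper in autotest_common.sh also manages xtrace and result bookkeeping, omitted here:

run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                 # emits the real/user/sys block when the suite ends
  local rc=$?
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
  return $rc
}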
00:40:12.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:12.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.065 --rc genhtml_branch_coverage=1 00:40:12.065 --rc genhtml_function_coverage=1 00:40:12.065 --rc genhtml_legend=1 00:40:12.065 --rc geninfo_all_blocks=1 00:40:12.065 --rc geninfo_unexecuted_blocks=1 00:40:12.065 00:40:12.065 ' 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:12.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.065 --rc genhtml_branch_coverage=1 00:40:12.065 --rc genhtml_function_coverage=1 00:40:12.065 --rc genhtml_legend=1 00:40:12.065 --rc geninfo_all_blocks=1 00:40:12.065 --rc geninfo_unexecuted_blocks=1 00:40:12.065 00:40:12.065 ' 00:40:12.065 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:12.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.065 --rc genhtml_branch_coverage=1 00:40:12.066 --rc genhtml_function_coverage=1 00:40:12.066 --rc genhtml_legend=1 00:40:12.066 --rc geninfo_all_blocks=1 00:40:12.066 --rc geninfo_unexecuted_blocks=1 00:40:12.066 00:40:12.066 ' 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:12.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.066 --rc genhtml_branch_coverage=1 00:40:12.066 --rc genhtml_function_coverage=1 00:40:12.066 --rc genhtml_legend=1 00:40:12.066 --rc geninfo_all_blocks=1 00:40:12.066 --rc geninfo_unexecuted_blocks=1 00:40:12.066 00:40:12.066 ' 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:12.066 05:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:12.066 05:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:20.200 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:20.201 05:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:20.201 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:20.201 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:20.201 Found net devices under 0000:31:00.0: cvl_0_0 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:20.201 Found net devices under 0000:31:00.1: cvl_0_1 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:20.201 
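The pci_net_devs loop traced above is how the cvl_0_0/cvl_0_1 names are found: each supported E810 PCI function advertises its net device through sysfs. Reduced to its two array operations (which appear verbatim in the trace), with this machine's PCI addresses:

net_devs=()
for pci in 0000:31:00.0 0000:31:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done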
05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:20.201 05:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:20.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:20.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:40:20.201 00:40:20.201 --- 10.0.0.2 ping statistics --- 00:40:20.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:20.201 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:20.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:20.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:40:20.201 00:40:20.201 --- 10.0.0.1 ping statistics --- 00:40:20.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:20.201 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1865778 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1865778 00:40:20.201 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:20.202 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1865778 ']' 00:40:20.202 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:20.202 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:20.202 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:20.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:20.202 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:20.202 05:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:20.202 [2024-12-09 05:33:33.249538] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
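Stripped of trace prefixes, the nvmf_tcp_init sequence above builds a two-ended TCP topology on a single host: the target-side port (cvl_0_0) moves into a private namespace as 10.0.0.2, the initiator side (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule admits port 4420, and a ping in each direction proves the link. All names and addresses below are taken directly from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator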
00:40:20.202 [2024-12-09 05:33:33.252205] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:40:20.202 [2024-12-09 05:33:33.252305] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:20.202 [2024-12-09 05:33:33.403126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:20.202 [2024-12-09 05:33:33.507865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:20.202 [2024-12-09 05:33:33.507913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:20.202 [2024-12-09 05:33:33.507927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:20.202 [2024-12-09 05:33:33.507937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:20.202 [2024-12-09 05:33:33.507948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:20.202 [2024-12-09 05:33:33.510135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.202 [2024-12-09 05:33:33.510337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.202 [2024-12-09 05:33:33.510362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:20.202 [2024-12-09 05:33:33.776450] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:20.202 [2024-12-09 05:33:33.777213] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:20.202 [2024-12-09 05:33:33.777277] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:20.202 [2024-12-09 05:33:33.777533] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:20.202 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:20.202 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:40:20.202 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:20.202 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:20.202 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:20.202 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:20.202 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:20.462 [2024-12-09 05:33:34.215611] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.462 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:20.722 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:20.722 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:20.983 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:20.983 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:20.983 05:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:21.244 05:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=db37ed39-d078-40a4-b309-7c61100313bc 00:40:21.244 05:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db37ed39-d078-40a4-b309-7c61100313bc lvol 20 00:40:21.505 05:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=369c27d9-9564-49c0-9fa9-094f27d0f702 00:40:21.505 05:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:21.505 05:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 369c27d9-9564-49c0-9fa9-094f27d0f702 00:40:21.766 05:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:22.027 [2024-12-09 05:33:35.815551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:40:22.027 05:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:22.287 05:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1866394 00:40:22.287 05:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:22.287 05:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:23.226 05:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 369c27d9-9564-49c0-9fa9-094f27d0f702 MY_SNAPSHOT 00:40:23.486 05:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4f2c241a-b4e0-492d-bfc6-37bd9701e4a2 00:40:23.486 05:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 369c27d9-9564-49c0-9fa9-094f27d0f702 30 00:40:23.745 05:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4f2c241a-b4e0-492d-bfc6-37bd9701e4a2 MY_CLONE 00:40:23.745 05:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6c7d7c50-9f7b-461d-b724-ed01b5a2a5c7 00:40:23.746 05:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6c7d7c50-9f7b-461d-b724-ed01b5a2a5c7 00:40:24.407 05:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1866394 00:40:32.595 Initializing NVMe Controllers 00:40:32.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:32.595 Controller IO queue size 128, less than required. 00:40:32.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:32.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:32.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:32.596 Initialization complete. Launching workers. 
00:40:32.596 ========================================================
00:40:32.596 Latency(us)
00:40:32.596 Device Information : IOPS MiB/s Average min max
00:40:32.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14780.20 57.74 8662.76 307.36 130178.59
00:40:32.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14488.30 56.59 8836.07 4229.10 147096.68
00:40:32.596 ========================================================
00:40:32.596 Total : 29268.50 114.33 8748.55 307.36 147096.68
00:40:32.596
00:40:32.596 05:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:40:32.855 05:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 369c27d9-9564-49c0-9fa9-094f27d0f702
00:40:33.116 05:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db37ed39-d078-40a4-b309-7c61100313bc
00:40:33.116 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:40:33.116 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:40:33.116 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:40:33.116 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:40:33.116 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:40:33.116 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:33.116 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:40:33.116 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:33.116 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:40:33.116 rmmod nvme_tcp
00:40:33.376 rmmod nvme_fabrics
00:40:33.376 rmmod nvme_keyring
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1865778 ']'
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1865778
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1865778 ']'
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1865778
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1865778 00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1865778' 00:40:33.376 killing process with pid 1865778 00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1865778 00:40:33.376 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1865778 00:40:34.316 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:34.316 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:34.316 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:34.316 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:34.317 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:40:34.317 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:34.317 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:40:34.317 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:34.317 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:34.317 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.317 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.317 05:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:36.226 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:36.226 00:40:36.226 real 0m24.443s 00:40:36.226 user 0m57.485s 00:40:36.226 sys 0m10.589s 00:40:36.226 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:36.226 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:36.226 ************************************ 00:40:36.226 END TEST nvmf_lvol 00:40:36.226 ************************************ 00:40:36.226 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:36.226 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:36.226 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:36.226 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:36.226 ************************************ 00:40:36.226 START TEST nvmf_lvs_grow 00:40:36.226 
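Collapsed from the traces above, the whole nvmf_lvol exercise is a short RPC script: build a raid0 from two malloc bdevs, put a logical volume store on it, export a 20 MiB volume over NVMe/TCP, then snapshot, grow, clone, and inflate it while spdk_nvme_perf writes to the namespace, and finally tear it all down. Every rpc.py command is taken from the trace; the UUIDs are returned at runtime (this run's values are noted in comments):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # -> db37ed39-... this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB -> 369c27d9-...
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf runs against the namespace:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) # -> 4f2c241a-...
$rpc bdev_lvol_resize "$lvol" 30                    # grow the live volume to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)      # -> 6c7d7c50-...
$rpc bdev_lvol_inflate "$clone"                     # decouple the clone from its snapshot
# Teardown, as traced after the perf run completes:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"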
************************************ 00:40:36.226 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:36.226 * Looking for test storage... 00:40:36.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.486 --rc genhtml_branch_coverage=1 00:40:36.486 --rc genhtml_function_coverage=1 00:40:36.486 --rc genhtml_legend=1 00:40:36.486 --rc geninfo_all_blocks=1 00:40:36.486 --rc geninfo_unexecuted_blocks=1 00:40:36.486 00:40:36.486 ' 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.486 --rc genhtml_branch_coverage=1 00:40:36.486 --rc genhtml_function_coverage=1 00:40:36.486 --rc genhtml_legend=1 00:40:36.486 --rc geninfo_all_blocks=1 00:40:36.486 --rc geninfo_unexecuted_blocks=1 00:40:36.486 00:40:36.486 ' 00:40:36.486 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.486 --rc genhtml_branch_coverage=1 00:40:36.486 --rc genhtml_function_coverage=1 00:40:36.486 --rc genhtml_legend=1 00:40:36.486 --rc geninfo_all_blocks=1 00:40:36.486 --rc geninfo_unexecuted_blocks=1 00:40:36.487 00:40:36.487 ' 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:36.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.487 --rc genhtml_branch_coverage=1 00:40:36.487 --rc genhtml_function_coverage=1 00:40:36.487 --rc genhtml_legend=1 00:40:36.487 --rc geninfo_all_blocks=1 00:40:36.487 --rc geninfo_unexecuted_blocks=1 00:40:36.487 00:40:36.487 ' 00:40:36.487 05:33:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
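The PATH dump above balloons because paths/export.sh prepends the same /opt/golangci, /opt/protoc and /opt/go directories every time it is sourced, so the exported PATH repeats those entries several times over. Harmless for the run, but if the growth ever mattered, a minimal order-preserving dedupe could look like the sketch below, assuming only awk and sed are on PATH (dedupe_path is a hypothetical helper, not part of the harness):

    # Collapse duplicate PATH entries while keeping first-seen order.
    dedupe_path() {
      printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
    }
    PATH=$(dedupe_path)

awk with RS=: treats each PATH component as its own record and prints it only the first time it is seen; the trailing sed strips the colon that ORS leaves at the end.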
00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:36.487 05:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:44.616 05:33:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:44.616 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
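The arrays being populated here are keyed by PCI vendor:device IDs (Intel 0x1592/0x159b for E810, 0x37d2 for X722, and the Mellanox 0xa2xx/0x10xx list), and pci_devs is then narrowed to the e810 set because SPDK_TEST_NVMF_NICS=e810. A rough standalone equivalent of the scan that follows is sketched below, assuming lspci from pciutils is available (the loop body is illustrative, not the common.sh code itself):

    # Enumerate NICs by the same vendor:device IDs common.sh matches on.
    for id in 8086:1592 8086:159b; do                      # E810 variants
      lspci -D -d "$id" | while read -r addr _; do
        echo "Found $addr ($id)"
        ls "/sys/bus/pci/devices/$addr/net" 2>/dev/null    # bound netdev, e.g. cvl_0_0
      done
    done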
00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:44.617 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:44.617 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:44.617 Found net devices under 0000:31:00.0: cvl_0_0 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:44.617 Found net devices under 0000:31:00.1: cvl_0_1 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:44.617 05:33:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:44.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:44.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:40:44.617 00:40:44.617 --- 10.0.0.2 ping statistics --- 00:40:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:44.617 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:44.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:44.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:40:44.617 00:40:44.617 --- 10.0.0.1 ping statistics --- 00:40:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:44.617 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:44.617 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1872736 00:40:44.618 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1872736 00:40:44.618 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:44.618 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1872736 ']' 00:40:44.618 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:44.618 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:44.618 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:44.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:44.618 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:44.618 05:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:44.618 [2024-12-09 05:33:57.928546] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
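Everything from ip netns add through the two pings above is the phy-mode network split: the target port cvl_0_0 moves into its own namespace with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, and nvmf_tgt is then launched inside that namespace via NVMF_TARGET_NS_CMD (the ip netns exec nvmf_tgt line below). Condensed from the trace, with the interface names exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back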
00:40:44.618 [2024-12-09 05:33:57.931243] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:40:44.618 [2024-12-09 05:33:57.931346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:44.618 [2024-12-09 05:33:58.096922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:44.618 [2024-12-09 05:33:58.195823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:44.618 [2024-12-09 05:33:58.195865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:44.618 [2024-12-09 05:33:58.195880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:44.618 [2024-12-09 05:33:58.195893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:44.618 [2024-12-09 05:33:58.195905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:44.618 [2024-12-09 05:33:58.197128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.618 [2024-12-09 05:33:58.440870] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:44.618 [2024-12-09 05:33:58.441175] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:44.878 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:44.878 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:40:44.878 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:44.879 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:44.879 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:44.879 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:44.879 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:45.139 [2024-12-09 05:33:58.898357] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:45.139 ************************************ 00:40:45.139 START TEST lvs_grow_clean 00:40:45.139 ************************************ 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:45.139 05:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:45.400 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:45.400 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:45.400 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b425c493-e496-454f-bba9-49f1bf2da08b 00:40:45.400 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b425c493-e496-454f-bba9-49f1bf2da08b 00:40:45.400 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:45.661 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:45.661 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:45.661 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b425c493-e496-454f-bba9-49f1bf2da08b lvol 150 00:40:45.922 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a2102e43-8d63-4e36-9cac-7d59bbec2cce 00:40:45.922 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:45.922 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:45.922 [2024-12-09 05:33:59.902040] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:45.922 [2024-12-09 05:33:59.902268] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:45.922 true 00:40:46.183 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:46.183 05:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b425c493-e496-454f-bba9-49f1bf2da08b 00:40:46.183 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:46.183 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:46.444 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a2102e43-8d63-4e36-9cac-7d59bbec2cce 00:40:46.703 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:46.703 [2024-12-09 05:34:00.634810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:46.703 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1873246 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1873246 /var/tmp/bdevperf.sock 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1873246 ']' 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:46.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:46.963 05:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:46.963 [2024-12-09 05:34:00.887395] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:40:46.963 [2024-12-09 05:34:00.887506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873246 ] 00:40:47.227 [2024-12-09 05:34:01.030269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:47.227 [2024-12-09 05:34:01.127523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:47.799 05:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:47.799 05:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:40:47.799 05:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:48.059 Nvme0n1 00:40:48.059 05:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:48.320 [ 00:40:48.320 { 00:40:48.321 "name": "Nvme0n1", 00:40:48.321 "aliases": [ 00:40:48.321 "a2102e43-8d63-4e36-9cac-7d59bbec2cce" 00:40:48.321 ], 00:40:48.321 "product_name": "NVMe disk", 00:40:48.321 "block_size": 4096, 00:40:48.321 "num_blocks": 38912, 00:40:48.321 "uuid": "a2102e43-8d63-4e36-9cac-7d59bbec2cce", 00:40:48.321 "numa_id": 0, 00:40:48.321 "assigned_rate_limits": { 00:40:48.321 "rw_ios_per_sec": 0, 00:40:48.321 "rw_mbytes_per_sec": 0, 00:40:48.321 "r_mbytes_per_sec": 0, 00:40:48.321 "w_mbytes_per_sec": 0 00:40:48.321 }, 00:40:48.321 "claimed": false, 00:40:48.321 "zoned": false, 00:40:48.321 "supported_io_types": { 00:40:48.321 "read": true, 00:40:48.321 "write": true, 00:40:48.321 "unmap": true, 00:40:48.321 "flush": true, 00:40:48.321 "reset": true, 00:40:48.321 "nvme_admin": true, 00:40:48.321 "nvme_io": true, 00:40:48.321 "nvme_io_md": false, 00:40:48.321 "write_zeroes": true, 00:40:48.321 "zcopy": false, 00:40:48.321 "get_zone_info": false, 00:40:48.321 "zone_management": false, 00:40:48.321 "zone_append": false, 00:40:48.321 "compare": true, 00:40:48.321 "compare_and_write": true, 00:40:48.321 "abort": true, 00:40:48.321 "seek_hole": false, 00:40:48.321 "seek_data": false, 00:40:48.321 "copy": true, 
00:40:48.321 "nvme_iov_md": false 00:40:48.321 }, 00:40:48.321 "memory_domains": [ 00:40:48.321 { 00:40:48.321 "dma_device_id": "system", 00:40:48.321 "dma_device_type": 1 00:40:48.321 } 00:40:48.321 ], 00:40:48.321 "driver_specific": { 00:40:48.321 "nvme": [ 00:40:48.321 { 00:40:48.321 "trid": { 00:40:48.321 "trtype": "TCP", 00:40:48.321 "adrfam": "IPv4", 00:40:48.321 "traddr": "10.0.0.2", 00:40:48.321 "trsvcid": "4420", 00:40:48.321 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:48.321 }, 00:40:48.321 "ctrlr_data": { 00:40:48.321 "cntlid": 1, 00:40:48.321 "vendor_id": "0x8086", 00:40:48.321 "model_number": "SPDK bdev Controller", 00:40:48.321 "serial_number": "SPDK0", 00:40:48.321 "firmware_revision": "25.01", 00:40:48.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:48.321 "oacs": { 00:40:48.321 "security": 0, 00:40:48.321 "format": 0, 00:40:48.321 "firmware": 0, 00:40:48.321 "ns_manage": 0 00:40:48.321 }, 00:40:48.321 "multi_ctrlr": true, 00:40:48.321 "ana_reporting": false 00:40:48.321 }, 00:40:48.321 "vs": { 00:40:48.321 "nvme_version": "1.3" 00:40:48.321 }, 00:40:48.321 "ns_data": { 00:40:48.321 "id": 1, 00:40:48.321 "can_share": true 00:40:48.321 } 00:40:48.321 } 00:40:48.321 ], 00:40:48.321 "mp_policy": "active_passive" 00:40:48.321 } 00:40:48.321 } 00:40:48.321 ] 00:40:48.321 05:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:48.321 05:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1873578 00:40:48.321 05:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:48.321 Running I/O for 10 seconds... 
00:40:49.263 Latency(us) 00:40:49.263 [2024-12-09T04:34:03.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:49.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:49.264 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:40:49.264 [2024-12-09T04:34:03.261Z] =================================================================================================================== 00:40:49.264 [2024-12-09T04:34:03.261Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:40:49.264 00:40:50.203 05:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b425c493-e496-454f-bba9-49f1bf2da08b 00:40:50.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:50.203 Nvme0n1 : 2.00 15557.50 60.77 0.00 0.00 0.00 0.00 0.00 00:40:50.203 [2024-12-09T04:34:04.200Z] =================================================================================================================== 00:40:50.203 [2024-12-09T04:34:04.200Z] Total : 15557.50 60.77 0.00 0.00 0.00 0.00 0.00 00:40:50.203 00:40:50.463 true 00:40:50.463 05:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b425c493-e496-454f-bba9-49f1bf2da08b 00:40:50.463 05:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:50.723 05:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:50.723 05:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:50.723 05:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1873578 00:40:51.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:51.297 Nvme0n1 : 3.00 15790.33 61.68 0.00 0.00 0.00 0.00 0.00 00:40:51.297 [2024-12-09T04:34:05.294Z] =================================================================================================================== 00:40:51.297 [2024-12-09T04:34:05.294Z] Total : 15790.33 61.68 0.00 0.00 0.00 0.00 0.00 00:40:51.297 00:40:52.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:52.238 Nvme0n1 : 4.00 16328.00 63.78 0.00 0.00 0.00 0.00 0.00 00:40:52.238 [2024-12-09T04:34:06.235Z] =================================================================================================================== 00:40:52.238 [2024-12-09T04:34:06.235Z] Total : 16328.00 63.78 0.00 0.00 0.00 0.00 0.00 00:40:52.238 00:40:53.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:53.622 Nvme0n1 : 5.00 17659.80 68.98 0.00 0.00 0.00 0.00 0.00 00:40:53.622 [2024-12-09T04:34:07.619Z] =================================================================================================================== 00:40:53.622 [2024-12-09T04:34:07.619Z] Total : 17659.80 68.98 0.00 0.00 0.00 0.00 0.00 00:40:53.622 00:40:54.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:54.563 Nvme0n1 : 6.00 18547.67 72.45 0.00 0.00 0.00 0.00 0.00 00:40:54.563 [2024-12-09T04:34:08.560Z] 
=================================================================================================================== 00:40:54.563 [2024-12-09T04:34:08.560Z] Total : 18547.67 72.45 0.00 0.00 0.00 0.00 0.00 00:40:54.563 00:40:55.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:55.506 Nvme0n1 : 7.00 19182.00 74.93 0.00 0.00 0.00 0.00 0.00 00:40:55.506 [2024-12-09T04:34:09.503Z] =================================================================================================================== 00:40:55.506 [2024-12-09T04:34:09.503Z] Total : 19182.00 74.93 0.00 0.00 0.00 0.00 0.00 00:40:55.506 00:40:56.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:56.447 Nvme0n1 : 8.00 19657.62 76.79 0.00 0.00 0.00 0.00 0.00 00:40:56.447 [2024-12-09T04:34:10.444Z] =================================================================================================================== 00:40:56.447 [2024-12-09T04:34:10.444Z] Total : 19657.62 76.79 0.00 0.00 0.00 0.00 0.00 00:40:56.447 00:40:57.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:57.389 Nvme0n1 : 9.00 20027.56 78.23 0.00 0.00 0.00 0.00 0.00 00:40:57.389 [2024-12-09T04:34:11.386Z] =================================================================================================================== 00:40:57.389 [2024-12-09T04:34:11.386Z] Total : 20027.56 78.23 0.00 0.00 0.00 0.00 0.00 00:40:57.389 00:40:58.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:58.329 Nvme0n1 : 10.00 20336.20 79.44 0.00 0.00 0.00 0.00 0.00 00:40:58.329 [2024-12-09T04:34:12.326Z] =================================================================================================================== 00:40:58.329 [2024-12-09T04:34:12.326Z] Total : 20336.20 79.44 0.00 0.00 0.00 0.00 0.00 00:40:58.329 00:40:58.329 00:40:58.329 Latency(us) 00:40:58.329 [2024-12-09T04:34:12.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:58.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:58.329 Nvme0n1 : 10.01 20336.61 79.44 0.00 0.00 6290.85 3249.49 33423.36 00:40:58.329 [2024-12-09T04:34:12.326Z] =================================================================================================================== 00:40:58.329 [2024-12-09T04:34:12.326Z] Total : 20336.61 79.44 0.00 0.00 6290.85 3249.49 33423.36 00:40:58.329 { 00:40:58.329 "results": [ 00:40:58.329 { 00:40:58.329 "job": "Nvme0n1", 00:40:58.329 "core_mask": "0x2", 00:40:58.329 "workload": "randwrite", 00:40:58.329 "status": "finished", 00:40:58.329 "queue_depth": 128, 00:40:58.329 "io_size": 4096, 00:40:58.329 "runtime": 10.006092, 00:40:58.329 "iops": 20336.610936617413, 00:40:58.329 "mibps": 79.43988647116177, 00:40:58.329 "io_failed": 0, 00:40:58.329 "io_timeout": 0, 00:40:58.329 "avg_latency_us": 6290.84861487051, 00:40:58.329 "min_latency_us": 3249.4933333333333, 00:40:58.329 "max_latency_us": 33423.36 00:40:58.329 } 00:40:58.329 ], 00:40:58.329 "core_count": 1 00:40:58.329 } 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1873246 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1873246 ']' 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1873246 00:40:58.329 
05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1873246 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1873246' 00:40:58.329 killing process with pid 1873246 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1873246 00:40:58.329 Received shutdown signal, test time was about 10.000000 seconds 00:40:58.329 00:40:58.329 Latency(us) 00:40:58.329 [2024-12-09T04:34:12.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:58.329 [2024-12-09T04:34:12.326Z] =================================================================================================================== 00:40:58.329 [2024-12-09T04:34:12.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:58.329 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1873246 00:40:58.898 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:59.158 05:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:59.158 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b425c493-e496-454f-bba9-49f1bf2da08b 00:40:59.158 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:59.418 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:59.418 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:59.418 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:59.678 [2024-12-09 05:34:13.430203] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b425c493-e496-454f-bba9-49f1bf2da08b 00:40:59.678 
05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b425c493-e496-454f-bba9-49f1bf2da08b 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b425c493-e496-454f-bba9-49f1bf2da08b 00:40:59.678 request: 00:40:59.678 { 00:40:59.678 "uuid": "b425c493-e496-454f-bba9-49f1bf2da08b", 00:40:59.678 "method": "bdev_lvol_get_lvstores", 00:40:59.678 "req_id": 1 00:40:59.678 } 00:40:59.678 Got JSON-RPC error response 00:40:59.678 response: 00:40:59.678 { 00:40:59.678 "code": -19, 00:40:59.678 "message": "No such device" 00:40:59.678 } 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:59.678 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:59.937 aio_bdev 00:40:59.937 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a2102e43-8d63-4e36-9cac-7d59bbec2cce 00:40:59.937 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a2102e43-8d63-4e36-9cac-7d59bbec2cce 00:40:59.937 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:59.937 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:59.938 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:59.938 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:59.938 05:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:00.198 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a2102e43-8d63-4e36-9cac-7d59bbec2cce -t 2000 00:41:00.198 [ 00:41:00.198 { 00:41:00.198 "name": "a2102e43-8d63-4e36-9cac-7d59bbec2cce", 00:41:00.198 "aliases": [ 00:41:00.198 "lvs/lvol" 00:41:00.198 ], 00:41:00.198 "product_name": "Logical Volume", 00:41:00.198 "block_size": 4096, 00:41:00.198 "num_blocks": 38912, 00:41:00.198 "uuid": "a2102e43-8d63-4e36-9cac-7d59bbec2cce", 00:41:00.198 "assigned_rate_limits": { 00:41:00.198 "rw_ios_per_sec": 0, 00:41:00.198 "rw_mbytes_per_sec": 0, 00:41:00.198 "r_mbytes_per_sec": 0, 00:41:00.198 "w_mbytes_per_sec": 0 00:41:00.198 }, 00:41:00.198 "claimed": false, 00:41:00.198 "zoned": false, 00:41:00.198 "supported_io_types": { 00:41:00.198 "read": true, 00:41:00.198 "write": true, 00:41:00.198 "unmap": true, 00:41:00.198 "flush": false, 00:41:00.198 "reset": true, 00:41:00.198 "nvme_admin": false, 00:41:00.198 "nvme_io": false, 00:41:00.198 "nvme_io_md": false, 00:41:00.198 "write_zeroes": true, 00:41:00.198 "zcopy": false, 00:41:00.198 "get_zone_info": false, 00:41:00.198 "zone_management": false, 00:41:00.198 "zone_append": false, 00:41:00.198 "compare": false, 00:41:00.198 "compare_and_write": false, 00:41:00.198 "abort": false, 00:41:00.198 "seek_hole": true, 00:41:00.198 "seek_data": true, 00:41:00.198 "copy": false, 00:41:00.198 "nvme_iov_md": false 00:41:00.198 }, 00:41:00.198 "driver_specific": { 00:41:00.198 "lvol": { 00:41:00.198 "lvol_store_uuid": "b425c493-e496-454f-bba9-49f1bf2da08b", 00:41:00.198 "base_bdev": "aio_bdev", 00:41:00.198 "thin_provision": false, 00:41:00.198 "num_allocated_clusters": 38, 00:41:00.198 "snapshot": false, 00:41:00.198 "clone": false, 00:41:00.198 "esnap_clone": false 00:41:00.198 } 00:41:00.198 } 00:41:00.198 } 00:41:00.198 ] 00:41:00.198 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:41:00.199 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b425c493-e496-454f-bba9-49f1bf2da08b 00:41:00.199 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:00.459 05:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:00.459 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b425c493-e496-454f-bba9-49f1bf2da08b 00:41:00.459 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:00.719 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:00.719 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a2102e43-8d63-4e36-9cac-7d59bbec2cce 00:41:00.719 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b425c493-e496-454f-bba9-49f1bf2da08b 00:41:00.979 05:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:01.240 00:41:01.240 real 0m16.103s 00:41:01.240 user 0m15.628s 00:41:01.240 sys 0m1.457s 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:01.240 ************************************ 00:41:01.240 END TEST lvs_grow_clean 00:41:01.240 ************************************ 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:01.240 ************************************ 00:41:01.240 START TEST lvs_grow_dirty 00:41:01.240 ************************************ 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:01.240 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:01.500 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:01.500 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:01.760 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5335d71d-0d6c-4937-a330-13002c3d934d 00:41:01.760 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:01.760 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:01.760 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:01.760 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:01.760 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5335d71d-0d6c-4937-a330-13002c3d934d lvol 150 00:41:02.020 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a730668a-60e2-430f-a69b-89038258262c 00:41:02.020 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:02.020 05:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:02.281 [2024-12-09 05:34:16.046010] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:02.281 [2024-12-09 05:34:16.046236] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:02.281 true 00:41:02.281 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:02.281 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:02.281 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:02.281 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:02.541 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a730668a-60e2-430f-a69b-89038258262c 00:41:02.809 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:02.809 [2024-12-09 05:34:16.690586] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:02.809 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1876324 00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1876324 /var/tmp/bdevperf.sock 00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1876324 ']' 00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:03.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:03.069 05:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:03.069 [2024-12-09 05:34:16.951854] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:03.070 [2024-12-09 05:34:16.951967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876324 ] 00:41:03.332 [2024-12-09 05:34:17.083080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:03.332 [2024-12-09 05:34:17.162374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:03.904 05:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:03.904 05:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:03.904 05:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:04.164 Nvme0n1 00:41:04.164 05:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:04.427 [ 00:41:04.427 { 00:41:04.427 "name": "Nvme0n1", 00:41:04.427 "aliases": [ 00:41:04.427 "a730668a-60e2-430f-a69b-89038258262c" 00:41:04.427 ], 00:41:04.427 "product_name": "NVMe disk", 00:41:04.427 "block_size": 4096, 00:41:04.427 "num_blocks": 38912, 00:41:04.427 "uuid": "a730668a-60e2-430f-a69b-89038258262c", 00:41:04.427 "numa_id": 0, 00:41:04.427 "assigned_rate_limits": { 00:41:04.427 "rw_ios_per_sec": 0, 00:41:04.427 "rw_mbytes_per_sec": 0, 00:41:04.427 "r_mbytes_per_sec": 0, 00:41:04.427 "w_mbytes_per_sec": 0 00:41:04.427 }, 00:41:04.427 "claimed": false, 00:41:04.427 "zoned": false, 00:41:04.427 "supported_io_types": { 00:41:04.427 "read": true, 00:41:04.427 "write": true, 00:41:04.427 "unmap": true, 00:41:04.427 "flush": true, 00:41:04.427 "reset": true, 00:41:04.427 "nvme_admin": true, 00:41:04.427 "nvme_io": true, 00:41:04.427 "nvme_io_md": false, 00:41:04.427 "write_zeroes": true, 00:41:04.427 "zcopy": false, 00:41:04.427 "get_zone_info": false, 00:41:04.427 "zone_management": false, 00:41:04.427 "zone_append": false, 00:41:04.427 "compare": true, 00:41:04.427 "compare_and_write": true, 00:41:04.427 "abort": true, 00:41:04.427 "seek_hole": false, 00:41:04.427 "seek_data": false, 00:41:04.427 "copy": true, 00:41:04.427 "nvme_iov_md": false 00:41:04.427 }, 00:41:04.427 "memory_domains": [ 00:41:04.427 { 00:41:04.427 "dma_device_id": "system", 00:41:04.427 "dma_device_type": 1 00:41:04.427 } 00:41:04.427 ], 00:41:04.427 "driver_specific": { 00:41:04.427 "nvme": [ 00:41:04.427 { 00:41:04.427 "trid": { 00:41:04.427 "trtype": "TCP", 00:41:04.427 "adrfam": "IPv4", 00:41:04.427 "traddr": "10.0.0.2", 00:41:04.427 "trsvcid": "4420", 00:41:04.427 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:04.427 }, 00:41:04.427 "ctrlr_data": 
{ 00:41:04.427 "cntlid": 1, 00:41:04.427 "vendor_id": "0x8086", 00:41:04.427 "model_number": "SPDK bdev Controller", 00:41:04.427 "serial_number": "SPDK0", 00:41:04.427 "firmware_revision": "25.01", 00:41:04.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:04.427 "oacs": { 00:41:04.427 "security": 0, 00:41:04.427 "format": 0, 00:41:04.427 "firmware": 0, 00:41:04.427 "ns_manage": 0 00:41:04.427 }, 00:41:04.427 "multi_ctrlr": true, 00:41:04.427 "ana_reporting": false 00:41:04.427 }, 00:41:04.427 "vs": { 00:41:04.427 "nvme_version": "1.3" 00:41:04.427 }, 00:41:04.427 "ns_data": { 00:41:04.427 "id": 1, 00:41:04.427 "can_share": true 00:41:04.427 } 00:41:04.427 } 00:41:04.427 ], 00:41:04.427 "mp_policy": "active_passive" 00:41:04.427 } 00:41:04.427 } 00:41:04.427 ] 00:41:04.427 05:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:04.427 05:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1876652 00:41:04.427 05:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:04.427 Running I/O for 10 seconds... 00:41:05.368 Latency(us) 00:41:05.368 [2024-12-09T04:34:19.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:05.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:05.368 Nvme0n1 : 1.00 15357.00 59.99 0.00 0.00 0.00 0.00 0.00 00:41:05.368 [2024-12-09T04:34:19.365Z] =================================================================================================================== 00:41:05.368 [2024-12-09T04:34:19.365Z] Total : 15357.00 59.99 0.00 0.00 0.00 0.00 0.00 00:41:05.368 00:41:06.310 05:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:06.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:06.571 Nvme0n1 : 2.00 15518.50 60.62 0.00 0.00 0.00 0.00 0.00 00:41:06.571 [2024-12-09T04:34:20.568Z] =================================================================================================================== 00:41:06.571 [2024-12-09T04:34:20.568Z] Total : 15518.50 60.62 0.00 0.00 0.00 0.00 0.00 00:41:06.571 00:41:06.571 true 00:41:06.571 05:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:06.571 05:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:06.831 05:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:06.831 05:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:06.831 05:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1876652 00:41:07.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:07.399 Nvme0n1 : 
3.00 15599.00 60.93 0.00 0.00 0.00 0.00 0.00 00:41:07.399 [2024-12-09T04:34:21.396Z] =================================================================================================================== 00:41:07.399 [2024-12-09T04:34:21.396Z] Total : 15599.00 60.93 0.00 0.00 0.00 0.00 0.00 00:41:07.399 00:41:08.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:08.337 Nvme0n1 : 4.00 15651.25 61.14 0.00 0.00 0.00 0.00 0.00 00:41:08.337 [2024-12-09T04:34:22.334Z] =================================================================================================================== 00:41:08.337 [2024-12-09T04:34:22.334Z] Total : 15651.25 61.14 0.00 0.00 0.00 0.00 0.00 00:41:08.337 00:41:09.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:09.715 Nvme0n1 : 5.00 15692.20 61.30 0.00 0.00 0.00 0.00 0.00 00:41:09.715 [2024-12-09T04:34:23.712Z] =================================================================================================================== 00:41:09.715 [2024-12-09T04:34:23.712Z] Total : 15692.20 61.30 0.00 0.00 0.00 0.00 0.00 00:41:09.715 00:41:10.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:10.652 Nvme0n1 : 6.00 16772.83 65.52 0.00 0.00 0.00 0.00 0.00 00:41:10.652 [2024-12-09T04:34:24.649Z] =================================================================================================================== 00:41:10.652 [2024-12-09T04:34:24.649Z] Total : 16772.83 65.52 0.00 0.00 0.00 0.00 0.00 00:41:10.652 00:41:11.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:11.592 Nvme0n1 : 7.00 17556.14 68.58 0.00 0.00 0.00 0.00 0.00 00:41:11.592 [2024-12-09T04:34:25.589Z] =================================================================================================================== 00:41:11.592 [2024-12-09T04:34:25.589Z] Total : 17556.14 68.58 0.00 0.00 0.00 0.00 0.00 00:41:11.592 00:41:12.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:12.531 Nvme0n1 : 8.00 18149.62 70.90 0.00 0.00 0.00 0.00 0.00 00:41:12.531 [2024-12-09T04:34:26.528Z] =================================================================================================================== 00:41:12.531 [2024-12-09T04:34:26.528Z] Total : 18149.62 70.90 0.00 0.00 0.00 0.00 0.00 00:41:12.531 00:41:13.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:13.470 Nvme0n1 : 9.00 18611.22 72.70 0.00 0.00 0.00 0.00 0.00 00:41:13.470 [2024-12-09T04:34:27.467Z] =================================================================================================================== 00:41:13.470 [2024-12-09T04:34:27.467Z] Total : 18611.22 72.70 0.00 0.00 0.00 0.00 0.00 00:41:13.470 00:41:14.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:14.411 Nvme0n1 : 10.00 18985.30 74.16 0.00 0.00 0.00 0.00 0.00 00:41:14.411 [2024-12-09T04:34:28.408Z] =================================================================================================================== 00:41:14.411 [2024-12-09T04:34:28.408Z] Total : 18985.30 74.16 0.00 0.00 0.00 0.00 0.00 00:41:14.411 00:41:14.411 00:41:14.412 Latency(us) 00:41:14.412 [2024-12-09T04:34:28.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:14.412 Nvme0n1 : 10.01 18987.40 74.17 0.00 0.00 6736.71 4614.83 24357.55 00:41:14.412 
[2024-12-09T04:34:28.409Z] =================================================================================================================== 00:41:14.412 [2024-12-09T04:34:28.409Z] Total : 18987.40 74.17 0.00 0.00 6736.71 4614.83 24357.55 00:41:14.412 { 00:41:14.412 "results": [ 00:41:14.412 { 00:41:14.412 "job": "Nvme0n1", 00:41:14.412 "core_mask": "0x2", 00:41:14.412 "workload": "randwrite", 00:41:14.412 "status": "finished", 00:41:14.412 "queue_depth": 128, 00:41:14.412 "io_size": 4096, 00:41:14.412 "runtime": 10.007319, 00:41:14.412 "iops": 18987.403119656723, 00:41:14.412 "mibps": 74.16954343615907, 00:41:14.412 "io_failed": 0, 00:41:14.412 "io_timeout": 0, 00:41:14.412 "avg_latency_us": 6736.706481346013, 00:41:14.412 "min_latency_us": 4614.826666666667, 00:41:14.412 "max_latency_us": 24357.546666666665 00:41:14.412 } 00:41:14.412 ], 00:41:14.412 "core_count": 1 00:41:14.412 } 00:41:14.412 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1876324 00:41:14.412 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1876324 ']' 00:41:14.412 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1876324 00:41:14.412 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:41:14.412 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:14.412 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1876324 00:41:14.673 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:14.673 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:14.673 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1876324' 00:41:14.673 killing process with pid 1876324 00:41:14.673 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1876324 00:41:14.673 Received shutdown signal, test time was about 10.000000 seconds 00:41:14.673 00:41:14.673 Latency(us) 00:41:14.673 [2024-12-09T04:34:28.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.673 [2024-12-09T04:34:28.670Z] =================================================================================================================== 00:41:14.673 [2024-12-09T04:34:28.670Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:14.673 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1876324 00:41:14.933 05:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:15.195 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:41:15.456 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:15.456 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:15.456 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:15.456 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:15.456 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1872736 00:41:15.456 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1872736 00:41:15.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1872736 Killed "${NVMF_APP[@]}" "$@" 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1878680 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1878680 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1878680 ']' 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:15.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:15.717 05:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:15.717 [2024-12-09 05:34:29.537192] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:15.717 [2024-12-09 05:34:29.538981] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:15.717 [2024-12-09 05:34:29.539054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:15.717 [2024-12-09 05:34:29.652903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.978 [2024-12-09 05:34:29.727052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:15.978 [2024-12-09 05:34:29.727087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:15.978 [2024-12-09 05:34:29.727097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:15.978 [2024-12-09 05:34:29.727106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:15.978 [2024-12-09 05:34:29.727116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:15.978 [2024-12-09 05:34:29.728030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.978 [2024-12-09 05:34:29.909464] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:15.978 [2024-12-09 05:34:29.909680] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:16.551 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:16.551 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:16.551 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:16.551 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:16.551 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:16.551 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:16.551 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:16.551 [2024-12-09 05:34:30.508258] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:16.551 [2024-12-09 05:34:30.508707] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:16.551 [2024-12-09 05:34:30.508863] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:16.847 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:16.847 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a730668a-60e2-430f-a69b-89038258262c 00:41:16.847 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a730668a-60e2-430f-a69b-89038258262c 00:41:16.847 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:16.847 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:16.847 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:16.847 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:16.847 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:16.847 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a730668a-60e2-430f-a69b-89038258262c -t 2000 00:41:17.136 [ 00:41:17.136 { 00:41:17.136 "name": "a730668a-60e2-430f-a69b-89038258262c", 00:41:17.136 "aliases": [ 00:41:17.136 "lvs/lvol" 00:41:17.136 ], 00:41:17.136 "product_name": "Logical Volume", 00:41:17.136 "block_size": 4096, 00:41:17.136 "num_blocks": 38912, 00:41:17.136 "uuid": "a730668a-60e2-430f-a69b-89038258262c", 00:41:17.136 "assigned_rate_limits": { 00:41:17.136 "rw_ios_per_sec": 0, 00:41:17.136 "rw_mbytes_per_sec": 0, 00:41:17.136 
"r_mbytes_per_sec": 0, 00:41:17.136 "w_mbytes_per_sec": 0 00:41:17.136 }, 00:41:17.136 "claimed": false, 00:41:17.136 "zoned": false, 00:41:17.136 "supported_io_types": { 00:41:17.136 "read": true, 00:41:17.136 "write": true, 00:41:17.136 "unmap": true, 00:41:17.136 "flush": false, 00:41:17.136 "reset": true, 00:41:17.136 "nvme_admin": false, 00:41:17.136 "nvme_io": false, 00:41:17.136 "nvme_io_md": false, 00:41:17.136 "write_zeroes": true, 00:41:17.136 "zcopy": false, 00:41:17.136 "get_zone_info": false, 00:41:17.136 "zone_management": false, 00:41:17.136 "zone_append": false, 00:41:17.136 "compare": false, 00:41:17.136 "compare_and_write": false, 00:41:17.136 "abort": false, 00:41:17.136 "seek_hole": true, 00:41:17.136 "seek_data": true, 00:41:17.136 "copy": false, 00:41:17.136 "nvme_iov_md": false 00:41:17.136 }, 00:41:17.136 "driver_specific": { 00:41:17.136 "lvol": { 00:41:17.136 "lvol_store_uuid": "5335d71d-0d6c-4937-a330-13002c3d934d", 00:41:17.136 "base_bdev": "aio_bdev", 00:41:17.136 "thin_provision": false, 00:41:17.136 "num_allocated_clusters": 38, 00:41:17.136 "snapshot": false, 00:41:17.136 "clone": false, 00:41:17.136 "esnap_clone": false 00:41:17.136 } 00:41:17.136 } 00:41:17.136 } 00:41:17.136 ] 00:41:17.136 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:17.136 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:17.136 05:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:17.136 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:17.136 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:17.136 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:17.425 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:17.425 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:17.425 [2024-12-09 05:34:31.380894] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:17.735 05:34:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:17.735 request: 00:41:17.735 { 00:41:17.735 "uuid": "5335d71d-0d6c-4937-a330-13002c3d934d", 00:41:17.735 "method": "bdev_lvol_get_lvstores", 00:41:17.735 "req_id": 1 00:41:17.735 } 00:41:17.735 Got JSON-RPC error response 00:41:17.735 response: 00:41:17.735 { 00:41:17.735 "code": -19, 00:41:17.735 "message": "No such device" 00:41:17.735 } 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:17.735 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:17.995 aio_bdev 00:41:17.995 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a730668a-60e2-430f-a69b-89038258262c 00:41:17.995 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a730668a-60e2-430f-a69b-89038258262c 00:41:17.996 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:17.996 05:34:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:17.996 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:17.996 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:17.996 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:17.996 05:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a730668a-60e2-430f-a69b-89038258262c -t 2000 00:41:18.256 [ 00:41:18.256 { 00:41:18.256 "name": "a730668a-60e2-430f-a69b-89038258262c", 00:41:18.256 "aliases": [ 00:41:18.256 "lvs/lvol" 00:41:18.256 ], 00:41:18.256 "product_name": "Logical Volume", 00:41:18.256 "block_size": 4096, 00:41:18.256 "num_blocks": 38912, 00:41:18.256 "uuid": "a730668a-60e2-430f-a69b-89038258262c", 00:41:18.256 "assigned_rate_limits": { 00:41:18.256 "rw_ios_per_sec": 0, 00:41:18.256 "rw_mbytes_per_sec": 0, 00:41:18.256 "r_mbytes_per_sec": 0, 00:41:18.256 "w_mbytes_per_sec": 0 00:41:18.256 }, 00:41:18.256 "claimed": false, 00:41:18.256 "zoned": false, 00:41:18.256 "supported_io_types": { 00:41:18.256 "read": true, 00:41:18.256 "write": true, 00:41:18.256 "unmap": true, 00:41:18.256 "flush": false, 00:41:18.256 "reset": true, 00:41:18.256 "nvme_admin": false, 00:41:18.256 "nvme_io": false, 00:41:18.256 "nvme_io_md": false, 00:41:18.256 "write_zeroes": true, 00:41:18.256 "zcopy": false, 00:41:18.256 "get_zone_info": false, 00:41:18.256 "zone_management": false, 00:41:18.256 "zone_append": false, 00:41:18.256 "compare": false, 00:41:18.256 "compare_and_write": false, 00:41:18.256 "abort": false, 00:41:18.256 "seek_hole": true, 00:41:18.256 "seek_data": true, 00:41:18.256 "copy": false, 00:41:18.256 "nvme_iov_md": false 00:41:18.256 }, 00:41:18.256 "driver_specific": { 00:41:18.256 "lvol": { 00:41:18.256 "lvol_store_uuid": "5335d71d-0d6c-4937-a330-13002c3d934d", 00:41:18.256 "base_bdev": "aio_bdev", 00:41:18.256 "thin_provision": false, 00:41:18.256 "num_allocated_clusters": 38, 00:41:18.256 "snapshot": false, 00:41:18.256 "clone": false, 00:41:18.256 "esnap_clone": false 00:41:18.256 } 00:41:18.256 } 00:41:18.256 } 00:41:18.256 ] 00:41:18.256 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:18.256 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:18.256 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:18.517 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:18.517 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:18.517 05:34:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:18.517 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:18.517 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a730668a-60e2-430f-a69b-89038258262c 00:41:18.777 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5335d71d-0d6c-4937-a330-13002c3d934d 00:41:19.037 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:19.037 05:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:19.037 00:41:19.037 real 0m17.878s 00:41:19.037 user 0m35.927s 00:41:19.037 sys 0m3.305s 00:41:19.037 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:19.037 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:19.037 ************************************ 00:41:19.037 END TEST lvs_grow_dirty 00:41:19.037 ************************************ 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:19.297 nvmf_trace.0 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:19.297 rmmod nvme_tcp 00:41:19.297 rmmod nvme_fabrics 00:41:19.297 rmmod nvme_keyring 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1878680 ']' 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1878680 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1878680 ']' 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1878680 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1878680 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1878680' 00:41:19.297 killing process with pid 1878680 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1878680 00:41:19.297 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1878680 00:41:19.867 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:19.867 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:19.867 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:19.867 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:19.867 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:41:19.867 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:20.126 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:41:20.126 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:20.126 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:20.126 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.127 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:20.127 05:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:22.045 05:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:22.045 00:41:22.045 real 0m45.820s 00:41:22.045 user 0m55.188s 00:41:22.045 sys 0m10.918s 00:41:22.045 05:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.045 05:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:22.045 ************************************ 00:41:22.045 END TEST nvmf_lvs_grow 00:41:22.045 ************************************ 00:41:22.045 05:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:22.045 05:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:22.045 05:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.045 05:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:22.045 ************************************ 00:41:22.045 START TEST nvmf_bdev_io_wait 00:41:22.045 ************************************ 00:41:22.045 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:22.305 * Looking for test storage... 
00:41:22.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:22.305 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:22.305 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:41:22.305 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:22.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.306 --rc genhtml_branch_coverage=1 00:41:22.306 --rc genhtml_function_coverage=1 00:41:22.306 --rc genhtml_legend=1 00:41:22.306 --rc geninfo_all_blocks=1 00:41:22.306 --rc geninfo_unexecuted_blocks=1 00:41:22.306 00:41:22.306 ' 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:22.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.306 --rc genhtml_branch_coverage=1 00:41:22.306 --rc genhtml_function_coverage=1 00:41:22.306 --rc genhtml_legend=1 00:41:22.306 --rc geninfo_all_blocks=1 00:41:22.306 --rc geninfo_unexecuted_blocks=1 00:41:22.306 00:41:22.306 ' 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:22.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.306 --rc genhtml_branch_coverage=1 00:41:22.306 --rc genhtml_function_coverage=1 00:41:22.306 --rc genhtml_legend=1 00:41:22.306 --rc geninfo_all_blocks=1 00:41:22.306 --rc geninfo_unexecuted_blocks=1 00:41:22.306 00:41:22.306 ' 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:22.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.306 --rc genhtml_branch_coverage=1 00:41:22.306 --rc genhtml_function_coverage=1 00:41:22.306 --rc genhtml_legend=1 00:41:22.306 --rc geninfo_all_blocks=1 00:41:22.306 --rc 
geninfo_unexecuted_blocks=1 00:41:22.306 00:41:22.306 ' 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:22.306 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:22.307 05:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:30.436 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:30.436 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:30.436 Found net devices under 0000:31:00.0: cvl_0_0 00:41:30.436 
05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:30.436 Found net devices under 0000:31:00.1: cvl_0_1 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:30.436 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:30.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:30.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:41:30.437 00:41:30.437 --- 10.0.0.2 ping statistics --- 00:41:30.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.437 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:30.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:30.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:41:30.437 00:41:30.437 --- 10.0.0.1 ping statistics --- 00:41:30.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.437 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1883626 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1883626 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1883626 ']' 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:30.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
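Annotation: nvmfappstart launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so the harness now blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A hedged sketch of that polling loop, not the real helper (which adds more retries and error reporting); rpc_get_methods is a standard SPDK RPC:

waitforlisten_sketch() {
  local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
  local i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
    if scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
      return 0                               # socket is up and answering
    fi
    sleep 0.5
  done
  return 1
}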
00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:30.437 05:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.437 [2024-12-09 05:34:43.627310] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:30.437 [2024-12-09 05:34:43.629606] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:30.437 [2024-12-09 05:34:43.629691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:30.437 [2024-12-09 05:34:43.779076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:30.437 [2024-12-09 05:34:43.882900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:30.437 [2024-12-09 05:34:43.882942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:30.437 [2024-12-09 05:34:43.882956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:30.437 [2024-12-09 05:34:43.882965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:30.437 [2024-12-09 05:34:43.882976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:30.437 [2024-12-09 05:34:43.885185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:30.437 [2024-12-09 05:34:43.885305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:30.437 [2024-12-09 05:34:43.885434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.437 [2024-12-09 05:34:43.885460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:30.437 [2024-12-09 05:34:43.885910] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:41:30.437 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:30.437 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:41:30.437 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:30.437 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:30.437 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.437 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:30.437 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:30.437 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.437 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.698 [2024-12-09 05:34:44.607583] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:30.698 [2024-12-09 05:34:44.608334] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:30.698 [2024-12-09 05:34:44.609116] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:30.698 [2024-12-09 05:34:44.609264] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
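Annotation: because the target was started with --wait-for-rpc, the test must set bdev options before the framework initializes, then finish bring-up entirely over RPC; that is the sequence traced here and in the lines that follow, condensed below with the default RPC socket assumed:

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_set_options -p 5 -c 1      # only legal before framework init
$RPC framework_start_init            # poll groups switch to interrupt mode
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420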
00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.698 [2024-12-09 05:34:44.618532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.698 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.959 Malloc0 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.959 [2024-12-09 05:34:44.758443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1883812 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1883814 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:30.959 { 00:41:30.959 "params": { 00:41:30.959 "name": "Nvme$subsystem", 00:41:30.959 "trtype": "$TEST_TRANSPORT", 00:41:30.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:30.959 "adrfam": "ipv4", 00:41:30.959 "trsvcid": "$NVMF_PORT", 00:41:30.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:30.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:30.959 "hdgst": ${hdgst:-false}, 00:41:30.959 "ddgst": ${ddgst:-false} 00:41:30.959 }, 00:41:30.959 "method": "bdev_nvme_attach_controller" 00:41:30.959 } 00:41:30.959 EOF 00:41:30.959 )") 00:41:30.959 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1883816 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:30.960 { 00:41:30.960 "params": { 00:41:30.960 "name": "Nvme$subsystem", 00:41:30.960 "trtype": "$TEST_TRANSPORT", 00:41:30.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:30.960 "adrfam": "ipv4", 00:41:30.960 "trsvcid": "$NVMF_PORT", 00:41:30.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:30.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:30.960 "hdgst": ${hdgst:-false}, 00:41:30.960 "ddgst": ${ddgst:-false} 00:41:30.960 }, 00:41:30.960 "method": "bdev_nvme_attach_controller" 00:41:30.960 } 00:41:30.960 EOF 00:41:30.960 )") 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1883819 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@35 -- # sync 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:30.960 { 00:41:30.960 "params": { 00:41:30.960 "name": "Nvme$subsystem", 00:41:30.960 "trtype": "$TEST_TRANSPORT", 00:41:30.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:30.960 "adrfam": "ipv4", 00:41:30.960 "trsvcid": "$NVMF_PORT", 00:41:30.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:30.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:30.960 "hdgst": ${hdgst:-false}, 00:41:30.960 "ddgst": ${ddgst:-false} 00:41:30.960 }, 00:41:30.960 "method": "bdev_nvme_attach_controller" 00:41:30.960 } 00:41:30.960 EOF 00:41:30.960 )") 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:30.960 { 00:41:30.960 "params": { 00:41:30.960 "name": "Nvme$subsystem", 00:41:30.960 "trtype": "$TEST_TRANSPORT", 00:41:30.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:30.960 "adrfam": "ipv4", 00:41:30.960 "trsvcid": "$NVMF_PORT", 00:41:30.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:30.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:30.960 "hdgst": ${hdgst:-false}, 00:41:30.960 "ddgst": ${ddgst:-false} 00:41:30.960 }, 00:41:30.960 "method": "bdev_nvme_attach_controller" 00:41:30.960 } 00:41:30.960 EOF 00:41:30.960 )") 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1883812 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:30.960 "params": { 00:41:30.960 "name": "Nvme1", 00:41:30.960 "trtype": "tcp", 00:41:30.960 "traddr": "10.0.0.2", 00:41:30.960 "adrfam": "ipv4", 00:41:30.960 "trsvcid": "4420", 00:41:30.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:30.960 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:30.960 "hdgst": false, 00:41:30.960 "ddgst": false 00:41:30.960 }, 00:41:30.960 "method": "bdev_nvme_attach_controller" 00:41:30.960 }' 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:30.960 "params": { 00:41:30.960 "name": "Nvme1", 00:41:30.960 "trtype": "tcp", 00:41:30.960 "traddr": "10.0.0.2", 00:41:30.960 "adrfam": "ipv4", 00:41:30.960 "trsvcid": "4420", 00:41:30.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:30.960 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:30.960 "hdgst": false, 00:41:30.960 "ddgst": false 00:41:30.960 }, 00:41:30.960 "method": "bdev_nvme_attach_controller" 00:41:30.960 }' 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:30.960 "params": { 00:41:30.960 "name": "Nvme1", 00:41:30.960 "trtype": "tcp", 00:41:30.960 "traddr": "10.0.0.2", 00:41:30.960 "adrfam": "ipv4", 00:41:30.960 "trsvcid": "4420", 00:41:30.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:30.960 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:30.960 "hdgst": false, 00:41:30.960 "ddgst": false 00:41:30.960 }, 00:41:30.960 "method": "bdev_nvme_attach_controller" 00:41:30.960 }' 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:30.960 05:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:30.960 "params": { 00:41:30.960 "name": "Nvme1", 00:41:30.960 "trtype": "tcp", 00:41:30.960 "traddr": "10.0.0.2", 00:41:30.960 "adrfam": "ipv4", 00:41:30.960 "trsvcid": "4420", 00:41:30.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:30.960 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:30.960 "hdgst": false, 00:41:30.960 "ddgst": false 00:41:30.960 }, 00:41:30.960 "method": "bdev_nvme_attach_controller" 00:41:30.960 }' 00:41:30.960 [2024-12-09 05:34:44.835261] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:30.960 [2024-12-09 05:34:44.835354] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:30.960 [2024-12-09 05:34:44.842618] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:41:30.960 [2024-12-09 05:34:44.842728] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:30.960 [2024-12-09 05:34:44.852290] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:30.960 [2024-12-09 05:34:44.852391] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:30.960 [2024-12-09 05:34:44.854787] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:30.960 [2024-12-09 05:34:44.854880] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:31.221 [2024-12-09 05:34:44.972848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:31.221 [2024-12-09 05:34:45.025957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:31.221 [2024-12-09 05:34:45.058754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:41:31.221 [2024-12-09 05:34:45.076792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:31.221 [2024-12-09 05:34:45.145580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:41:31.221 [2024-12-09 05:34:45.195615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:41:31.221 [2024-12-09 05:34:45.195799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:31.481 [2024-12-09 05:34:45.324999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:31.481 Running I/O for 1 seconds... 00:41:31.481 Running I/O for 1 seconds... 00:41:31.742 Running I/O for 1 seconds... 00:41:32.003 Running I/O for 1 seconds... 
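Annotation: four bdevperf instances now run concurrently against the same subsystem, one workload each (write/read/flush/unmap), each with its own core mask, instance id, and 256 MiB memory size, fed the attach-controller JSON printed above via /dev/fd/63. A condensed sketch of that fan-out (gen_nvmf_target_json is the helper whose output appears above):

BDEVPERF=build/examples/bdevperf
args=(-q 128 -o 4096 -t 1 -s 256)
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${args[@]}" -w write & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${args[@]}" -w read  & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${args[@]}" -w flush & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${args[@]}" -w unmap & UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID   # per-job results print below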
00:41:32.573 8180.00 IOPS, 31.95 MiB/s 00:41:32.573 Latency(us) 00:41:32.573 [2024-12-09T04:34:46.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:32.573 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:41:32.573 Nvme1n1 : 1.02 8168.17 31.91 0.00 0.00 15553.69 5789.01 34515.63 00:41:32.573 [2024-12-09T04:34:46.570Z] =================================================================================================================== 00:41:32.573 [2024-12-09T04:34:46.570Z] Total : 8168.17 31.91 0.00 0.00 15553.69 5789.01 34515.63 00:41:32.573 165816.00 IOPS, 647.72 MiB/s 00:41:32.573 Latency(us) 00:41:32.573 [2024-12-09T04:34:46.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:32.573 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:41:32.573 Nvme1n1 : 1.00 165469.07 646.36 0.00 0.00 769.31 332.80 2061.65 00:41:32.573 [2024-12-09T04:34:46.570Z] =================================================================================================================== 00:41:32.573 [2024-12-09T04:34:46.570Z] Total : 165469.07 646.36 0.00 0.00 769.31 332.80 2061.65 00:41:32.833 12708.00 IOPS, 49.64 MiB/s 00:41:32.833 Latency(us) 00:41:32.833 [2024-12-09T04:34:46.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:32.833 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:41:32.833 Nvme1n1 : 1.01 12778.88 49.92 0.00 0.00 9982.58 3686.40 19114.67 00:41:32.833 [2024-12-09T04:34:46.830Z] =================================================================================================================== 00:41:32.833 [2024-12-09T04:34:46.830Z] Total : 12778.88 49.92 0.00 0.00 9982.58 3686.40 19114.67 00:41:33.093 8994.00 IOPS, 35.13 MiB/s 00:41:33.093 Latency(us) 00:41:33.093 [2024-12-09T04:34:47.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:33.093 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:41:33.093 Nvme1n1 : 1.01 9096.75 35.53 0.00 0.00 14028.55 4232.53 41069.23 00:41:33.093 [2024-12-09T04:34:47.090Z] =================================================================================================================== 00:41:33.093 [2024-12-09T04:34:47.090Z] Total : 9096.75 35.53 0.00 0.00 14028.55 4232.53 41069.23 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1883814 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1883816 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1883819 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:33.353 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:33.353 rmmod nvme_tcp 00:41:33.353 rmmod nvme_fabrics 00:41:33.614 rmmod nvme_keyring 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1883626 ']' 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1883626 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1883626 ']' 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1883626 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1883626 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1883626' 00:41:33.614 killing process with pid 1883626 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1883626 00:41:33.614 05:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1883626 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.555 05:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.470 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:36.470 00:41:36.470 real 0m14.286s 00:41:36.470 user 0m21.419s 00:41:36.470 sys 0m7.817s 00:41:36.470 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:36.470 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:36.470 ************************************ 00:41:36.470 END TEST nvmf_bdev_io_wait 00:41:36.470 ************************************ 00:41:36.470 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:36.470 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:36.470 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:36.470 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:36.470 ************************************ 00:41:36.470 START TEST nvmf_queue_depth 00:41:36.470 ************************************ 00:41:36.470 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:36.732 * Looking for test storage... 
00:41:36.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:36.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.732 --rc genhtml_branch_coverage=1 00:41:36.732 --rc genhtml_function_coverage=1 00:41:36.732 --rc genhtml_legend=1 00:41:36.732 --rc geninfo_all_blocks=1 00:41:36.732 --rc geninfo_unexecuted_blocks=1 00:41:36.732 00:41:36.732 ' 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:36.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.732 --rc genhtml_branch_coverage=1 00:41:36.732 --rc genhtml_function_coverage=1 00:41:36.732 --rc genhtml_legend=1 00:41:36.732 --rc geninfo_all_blocks=1 00:41:36.732 --rc geninfo_unexecuted_blocks=1 00:41:36.732 00:41:36.732 ' 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:36.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.732 --rc genhtml_branch_coverage=1 00:41:36.732 --rc genhtml_function_coverage=1 00:41:36.732 --rc genhtml_legend=1 00:41:36.732 --rc geninfo_all_blocks=1 00:41:36.732 --rc geninfo_unexecuted_blocks=1 00:41:36.732 00:41:36.732 ' 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:36.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.732 --rc genhtml_branch_coverage=1 00:41:36.732 --rc genhtml_function_coverage=1 00:41:36.732 --rc genhtml_legend=1 00:41:36.732 --rc geninfo_all_blocks=1 00:41:36.732 --rc 
geninfo_unexecuted_blocks=1 00:41:36.732 00:41:36.732 ' 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.732 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:36.733 05:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
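[Annotation] nvmftestinit has just reached prepare_net_devs; with NET_TYPE=phy it scans the PCI bus for supported NICs instead of creating virtual interfaces. The enumeration that follows builds per-family allow-lists (e810, x722, mlx) keyed on vendor:device ids, and the [[ e810 == e810 ]] checks below then keep only the E810 ports. In spirit (not SPDK's literal code) the classification is:

    intel=0x8086 mellanox=0x15b3
    case "$vendor:$device" in
        "$intel:0x1592"|"$intel:0x159b") e810+=("$pci") ;;   # Intel E810, ice driver
        "$intel:0x37d2")                 x722+=("$pci") ;;
        "$mellanox:"*)                   mlx+=("$pci")  ;;   # ConnectX ids listed below
    esac

Both 0000:31:00.0 and 0000:31:00.1 report 0x8086 - 0x159b on this rig, so the pci_devs=("${e810[@]}") assignment below carries exactly the two ports whose net devices turn out to be cvl_0_0 and cvl_0_1.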
00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:44.862 05:34:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:44.862 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:44.862 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:44.863 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:41:44.863 Found net devices under 0000:31:00.0: cvl_0_0 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:44.863 Found net devices under 0000:31:00.1: cvl_0_1 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:44.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:44.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:41:44.863 00:41:44.863 --- 10.0.0.2 ping statistics --- 00:41:44.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:44.863 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:44.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:44.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:41:44.863 00:41:44.863 --- 10.0.0.1 ping statistics --- 00:41:44.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:44.863 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:44.863 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1888550 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1888550 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1888550 ']' 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:44.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
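[Annotation] The two successful pings confirm the topology nvmf_tcp_init just built: the target port cvl_0_0 (10.0.0.2/24) was moved into the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1/24) stays in the root namespace, so NVMe/TCP traffic genuinely crosses between the two physical E810 ports. Condensed from the trace above (interface and namespace names are this rig's):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                       # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> root ns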
00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:44.864 05:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:44.864 [2024-12-09 05:34:57.942377] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:44.864 [2024-12-09 05:34:57.944728] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:44.864 [2024-12-09 05:34:57.944811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:44.864 [2024-12-09 05:34:58.098558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:44.864 [2024-12-09 05:34:58.203377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:44.864 [2024-12-09 05:34:58.203433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:44.864 [2024-12-09 05:34:58.203449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:44.864 [2024-12-09 05:34:58.203462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:44.864 [2024-12-09 05:34:58.203476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:44.864 [2024-12-09 05:34:58.204943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:44.864 [2024-12-09 05:34:58.487844] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:44.864 [2024-12-09 05:34:58.488192] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
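[Annotation] The banner above is nvmf_tgt coming up inside the namespace. nvmfappstart accumulates the command line in the NVMF_APP array — the common.sh lines traced earlier append -i "$NVMF_APP_SHM_ID" -e 0xFFFF and --interrupt-mode — and then prefixes it with the namespace wrapper, which is why EAL initializes under ip netns exec. Roughly:

    NVMF_APP=(./build/bin/nvmf_tgt -i "$NVMF_APP_SHM_ID" -e 0xFFFF --interrupt-mode)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # ip netns exec cvl_0_0_ns_spdk ...
    "${NVMF_APP[@]}" -m 0x2 &                                # one reactor, pinned to core 1
    nvmfpid=$!                                               # 1888550 in this run
    waitforlisten "$nvmfpid"                                 # block until /var/tmp/spdk.sock is up

With --interrupt-mode the reactor and the spdk_threads it hosts sleep on file descriptors when idle instead of busy-polling, which is what the "to intr mode from intr mode" notices above report.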
00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:44.864 [2024-12-09 05:34:58.766240] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.864 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.125 Malloc0 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.125 [2024-12-09 05:34:58.898065] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1888888 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1888888 /var/tmp/bdevperf.sock 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1888888 ']' 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:45.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:45.125 05:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.125 [2024-12-09 05:34:58.992930] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
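[Annotation] Target configuration is now complete; the rpc_cmd calls above amount to five rpc.py invocations against the target's /var/tmp/spdk.sock: create the TCP transport, create a 64 MiB / 512 B-block malloc bdev, create subsystem cnode1, publish the bdev as its namespace, and listen on 10.0.0.2:4420. Stripped of the xtrace wrappers:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf itself is started with -z, so it parks on its own RPC socket until bdev_nvme_attach_controller connects NVMe0 to the subsystem over TCP and bdevperf.py -s /var/tmp/bdevperf.sock perform_tests launches the 10-second, queue-depth-1024 verify run whose IOPS ramp appears below.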
00:41:45.125 [2024-12-09 05:34:58.993049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888888 ] 00:41:45.385 [2024-12-09 05:34:59.150605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.385 [2024-12-09 05:34:59.272364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.958 05:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:45.958 05:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:45.958 05:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:45.958 05:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.958 05:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.958 NVMe0n1 00:41:45.958 05:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.958 05:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:45.958 Running I/O for 10 seconds... 00:41:48.285 8015.00 IOPS, 31.31 MiB/s [2024-12-09T04:35:03.220Z] 8063.00 IOPS, 31.50 MiB/s [2024-12-09T04:35:04.158Z] 8703.00 IOPS, 34.00 MiB/s [2024-12-09T04:35:05.095Z] 9479.00 IOPS, 37.03 MiB/s [2024-12-09T04:35:06.039Z] 10033.60 IOPS, 39.19 MiB/s [2024-12-09T04:35:06.980Z] 10387.00 IOPS, 40.57 MiB/s [2024-12-09T04:35:08.365Z] 10611.57 IOPS, 41.45 MiB/s [2024-12-09T04:35:09.307Z] 10821.38 IOPS, 42.27 MiB/s [2024-12-09T04:35:10.249Z] 10999.22 IOPS, 42.97 MiB/s [2024-12-09T04:35:10.249Z] 11129.40 IOPS, 43.47 MiB/s 00:41:56.252 Latency(us) 00:41:56.252 [2024-12-09T04:35:10.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:56.252 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:56.252 Verification LBA range: start 0x0 length 0x4000 00:41:56.253 NVMe0n1 : 10.06 11158.83 43.59 0.00 0.00 91379.79 24903.68 77332.48 00:41:56.253 [2024-12-09T04:35:10.250Z] =================================================================================================================== 00:41:56.253 [2024-12-09T04:35:10.250Z] Total : 11158.83 43.59 0.00 0.00 91379.79 24903.68 77332.48 00:41:56.253 { 00:41:56.253 "results": [ 00:41:56.253 { 00:41:56.253 "job": "NVMe0n1", 00:41:56.253 "core_mask": "0x1", 00:41:56.253 "workload": "verify", 00:41:56.253 "status": "finished", 00:41:56.253 "verify_range": { 00:41:56.253 "start": 0, 00:41:56.253 "length": 16384 00:41:56.253 }, 00:41:56.253 "queue_depth": 1024, 00:41:56.253 "io_size": 4096, 00:41:56.253 "runtime": 10.062617, 00:41:56.253 "iops": 11158.826774386822, 00:41:56.253 "mibps": 43.58916708744852, 00:41:56.253 "io_failed": 0, 00:41:56.253 "io_timeout": 0, 00:41:56.253 "avg_latency_us": 91379.79031208718, 00:41:56.253 "min_latency_us": 24903.68, 00:41:56.253 "max_latency_us": 77332.48 00:41:56.253 } 00:41:56.253 ], 
00:41:56.253 "core_count": 1 00:41:56.253 } 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1888888 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1888888 ']' 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1888888 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1888888 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1888888' 00:41:56.253 killing process with pid 1888888 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1888888 00:41:56.253 Received shutdown signal, test time was about 10.000000 seconds 00:41:56.253 00:41:56.253 Latency(us) 00:41:56.253 [2024-12-09T04:35:10.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:56.253 [2024-12-09T04:35:10.250Z] =================================================================================================================== 00:41:56.253 [2024-12-09T04:35:10.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:56.253 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1888888 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:56.825 rmmod nvme_tcp 00:41:56.825 rmmod nvme_fabrics 00:41:56.825 rmmod nvme_keyring 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:41:56.825 05:35:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1888550 ']' 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1888550 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1888550 ']' 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1888550 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1888550 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:56.825 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:56.826 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1888550' 00:41:56.826 killing process with pid 1888550 00:41:56.826 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1888550 00:41:56.826 05:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1888550 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:57.397 05:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:59.939 00:41:59.939 real 0m23.020s 00:41:59.939 user 0m25.848s 00:41:59.939 sys 0m7.367s 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:59.939 ************************************ 00:41:59.939 END TEST nvmf_queue_depth 00:41:59.939 ************************************ 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:59.939 ************************************ 00:41:59.939 START TEST nvmf_target_multipath 00:41:59.939 ************************************ 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:59.939 * Looking for test storage... 00:41:59.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:41:59.939 05:35:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.939 --rc genhtml_branch_coverage=1 00:41:59.939 --rc genhtml_function_coverage=1 00:41:59.939 --rc genhtml_legend=1 00:41:59.939 --rc geninfo_all_blocks=1 00:41:59.939 --rc geninfo_unexecuted_blocks=1 00:41:59.939 00:41:59.939 ' 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.939 --rc genhtml_branch_coverage=1 00:41:59.939 --rc genhtml_function_coverage=1 00:41:59.939 --rc genhtml_legend=1 00:41:59.939 --rc geninfo_all_blocks=1 00:41:59.939 --rc geninfo_unexecuted_blocks=1 00:41:59.939 00:41:59.939 ' 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.939 --rc genhtml_branch_coverage=1 00:41:59.939 --rc genhtml_function_coverage=1 00:41:59.939 --rc genhtml_legend=1 00:41:59.939 --rc geninfo_all_blocks=1 00:41:59.939 --rc 
geninfo_unexecuted_blocks=1 00:41:59.939 00:41:59.939 ' 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.939 --rc genhtml_branch_coverage=1 00:41:59.939 --rc genhtml_function_coverage=1 00:41:59.939 --rc genhtml_legend=1 00:41:59.939 --rc geninfo_all_blocks=1 00:41:59.939 --rc geninfo_unexecuted_blocks=1 00:41:59.939 00:41:59.939 ' 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:59.939 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
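The trace above shows nvmf/common.sh building the initiator identity from 'nvme gen-hostnqn'. A minimal sketch of that step, assuming the host ID is simply the UUID suffix of the generated NQN (the values in the trace are consistent with this, but common.sh's exact derivation is not shown here):

    # nvme-cli's gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # keep only the UUID after the last ':'
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # later handed to the initiator, e.g.:
    # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1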
00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:59.940 05:35:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:59.940 05:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
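What follows is gather_supported_nvmf_pci_devs walking the PCI bus and bucketing NICs by vendor:device ID before picking the test interfaces. A condensed, hypothetical re-implementation of that classification (the sysfs paths are standard; the helper name and the direct sysfs reads are assumptions — the harness itself consults a pre-built pci_bus_cache):

    classify_nvmf_nics() {
      local dev vendor device
      e810=() x722=() mlx=()
      for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        case "$vendor:$device" in
          0x8086:0x1592 | 0x8086:0x159b) e810+=("${dev##*/}") ;;  # Intel E810 (ice)
          0x8086:0x37d2)                 x722+=("${dev##*/}") ;;  # Intel X722
          0x15b3:*)                      mlx+=("${dev##*/}")  ;;  # Mellanox families
        esac
      done
    }

In the scan below, both E810 ports (0000:31:00.0 and 0000:31:00.1, 0x8086 - 0x159b) land in the e810 bucket, and their net devices cvl_0_0 and cvl_0_1 become the test interfaces.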
00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:08.076 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:08.077 05:35:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:08.077 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:08.077 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:08.077 05:35:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:08.077 Found net devices under 0000:31:00.0: cvl_0_0 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:08.077 Found net devices under 0000:31:00.1: cvl_0_1 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:08.077 05:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:08.077 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:08.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:08.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:42:08.077 00:42:08.077 --- 10.0.0.2 ping statistics --- 00:42:08.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.077 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:42:08.077 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:08.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:08.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:42:08.077 00:42:08.077 --- 10.0.0.1 ping statistics --- 00:42:08.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.077 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:42:08.077 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:08.077 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:08.078 only one NIC for nvmf test 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:08.078 rmmod nvme_tcp 00:42:08.078 rmmod nvme_fabrics 00:42:08.078 rmmod nvme_keyring 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:08.078 05:35:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:08.078 05:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:09.463 05:35:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:09.463 00:42:09.463 real 0m9.774s 00:42:09.463 user 0m2.120s 00:42:09.463 sys 0m5.600s 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:09.463 ************************************ 00:42:09.463 END TEST nvmf_target_multipath 00:42:09.463 ************************************ 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:09.463 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:09.463 ************************************ 00:42:09.463 START TEST nvmf_zcopy 00:42:09.463 ************************************ 00:42:09.464 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:09.464 * Looking for test storage... 
00:42:09.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:09.464 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:09.464 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:42:09.464 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.726 --rc genhtml_branch_coverage=1 00:42:09.726 --rc genhtml_function_coverage=1 00:42:09.726 --rc genhtml_legend=1 00:42:09.726 --rc geninfo_all_blocks=1 00:42:09.726 --rc geninfo_unexecuted_blocks=1 00:42:09.726 00:42:09.726 ' 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.726 --rc genhtml_branch_coverage=1 00:42:09.726 --rc genhtml_function_coverage=1 00:42:09.726 --rc genhtml_legend=1 00:42:09.726 --rc geninfo_all_blocks=1 00:42:09.726 --rc geninfo_unexecuted_blocks=1 00:42:09.726 00:42:09.726 ' 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.726 --rc genhtml_branch_coverage=1 00:42:09.726 --rc genhtml_function_coverage=1 00:42:09.726 --rc genhtml_legend=1 00:42:09.726 --rc geninfo_all_blocks=1 00:42:09.726 --rc geninfo_unexecuted_blocks=1 00:42:09.726 00:42:09.726 ' 00:42:09.726 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.726 --rc genhtml_branch_coverage=1 00:42:09.726 --rc genhtml_function_coverage=1 00:42:09.726 --rc genhtml_legend=1 00:42:09.726 --rc geninfo_all_blocks=1 00:42:09.726 --rc geninfo_unexecuted_blocks=1 00:42:09.726 00:42:09.727 ' 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:09.727 05:35:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:09.727 05:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:17.867 05:35:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:17.867 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.867 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:17.868 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:17.868 Found net devices under 0000:31:00.0: cvl_0_0 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:17.868 Found net devices under 0000:31:00.1: cvl_0_1 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:17.868 05:35:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:42:17.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:42:17.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms
00:42:17.868
00:42:17.868 --- 10.0.0.2 ping statistics ---
00:42:17.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:42:17.868 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:42:17.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:42:17.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms
00:42:17.868
00:42:17.868 --- 10.0.0.1 ping statistics ---
00:42:17.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:42:17.868 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1899316
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1899316
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1899316 ']'
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:42:17.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:42:17.868 05:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:17.868 [2024-12-09 05:35:30.950869] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:42:17.868 [2024-12-09 05:35:30.953548] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:42:17.868 [2024-12-09 05:35:30.953652] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:42:17.868 [2024-12-09 05:35:31.117161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:17.868 [2024-12-09 05:35:31.238038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:42:17.868 [2024-12-09 05:35:31.238103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:42:17.868 [2024-12-09 05:35:31.238200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:42:17.868 [2024-12-09 05:35:31.238216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:42:17.868 [2024-12-09 05:35:31.238230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:42:17.868 [2024-12-09 05:35:31.239716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:42:17.868 [2024-12-09 05:35:31.518735] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:42:17.869 [2024-12-09 05:35:31.519108] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
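For orientation: the nvmftestinit block above moved one physical e810 port (cvl_0_0) into a fresh network namespace and left its peer (cvl_0_1) in the root namespace, so target and initiator talk over a real link. A condensed sketch of just those commands, lifted from the trace with editorial comments; the cvl_0_* names and 10.0.0.x addresses are what this particular run discovered, not fixed constants:

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and prove both directions are reachable.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that wiring in place, nvmf_tgt is launched inside the namespace (the "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt" record above), which is why every later RPC and ping to the target also runs through the namespace.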
00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.869 [2024-12-09 05:35:31.772988] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.869 [2024-12-09 05:35:31.801292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:17.869 05:35:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.869 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:18.130 malloc0 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:18.130 { 00:42:18.130 "params": { 00:42:18.130 "name": "Nvme$subsystem", 00:42:18.130 "trtype": "$TEST_TRANSPORT", 00:42:18.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:18.130 "adrfam": "ipv4", 00:42:18.130 "trsvcid": "$NVMF_PORT", 00:42:18.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:18.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:18.130 "hdgst": ${hdgst:-false}, 00:42:18.130 "ddgst": ${ddgst:-false} 00:42:18.130 }, 00:42:18.130 "method": "bdev_nvme_attach_controller" 00:42:18.130 } 00:42:18.130 EOF 00:42:18.130 )") 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:18.130 05:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:18.130 "params": { 00:42:18.130 "name": "Nvme1", 00:42:18.130 "trtype": "tcp", 00:42:18.130 "traddr": "10.0.0.2", 00:42:18.130 "adrfam": "ipv4", 00:42:18.130 "trsvcid": "4420", 00:42:18.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:18.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:18.130 "hdgst": false, 00:42:18.130 "ddgst": false 00:42:18.130 }, 00:42:18.130 "method": "bdev_nvme_attach_controller" 00:42:18.130 }' 00:42:18.130 [2024-12-09 05:35:31.973713] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
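The rpc_cmd calls interleaved in the trace above are what actually build the target for the zcopy test. Outside the framework the same setup can be reproduced with SPDK's scripts/rpc.py, which is what rpc_cmd forwards to; a minimal sketch, assuming you run it from the spdk checkout with the target started inside cvl_0_0_ns_spdk and listening on the default /var/tmp/spdk.sock:

    RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"
    # TCP transport with zero-copy enabled and in-capsule data disabled (-c 0),
    # matching "nvmf_create_transport -t tcp -o -c 0 --zcopy" in the trace.
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem allowing any host (-a), serial number as logged, up to 10 namespaces.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB RAM-backed bdev with 4096-byte blocks, exported as namespace 1.
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1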
00:42:18.130 [2024-12-09 05:35:31.973822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899646 ]
00:42:18.130 [2024-12-09 05:35:32.114507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:18.391 [2024-12-09 05:35:32.213692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:18.962 Running I/O for 10 seconds...
00:42:20.957 5759.00 IOPS, 44.99 MiB/s
[2024-12-09T04:35:35.896Z] 5837.00 IOPS, 45.60 MiB/s
[2024-12-09T04:35:36.837Z] 5936.00 IOPS, 46.38 MiB/s
[2024-12-09T04:35:37.775Z] 6632.00 IOPS, 51.81 MiB/s
[2024-12-09T04:35:38.714Z] 7051.80 IOPS, 55.09 MiB/s
[2024-12-09T04:35:39.692Z] 7318.33 IOPS, 57.17 MiB/s
[2024-12-09T04:35:41.075Z] 7514.86 IOPS, 58.71 MiB/s
[2024-12-09T04:35:42.017Z] 7657.38 IOPS, 59.82 MiB/s
[2024-12-09T04:35:42.961Z] 7770.78 IOPS, 60.71 MiB/s
[2024-12-09T04:35:42.961Z] 7863.00 IOPS, 61.43 MiB/s
00:42:28.964 Latency(us)
00:42:28.964 [2024-12-09T04:35:42.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:28.964 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:42:28.964 Verification LBA range: start 0x0 length 0x1000
00:42:28.964 Nvme1n1 : 10.01 7865.20 61.45 0.00 0.00 16225.08 1522.35 32549.55
00:42:28.964 [2024-12-09T04:35:42.961Z] ===================================================================================================================
00:42:28.964 [2024-12-09T04:35:42.961Z] Total : 7865.20 61.45 0.00 0.00 16225.08 1522.35 32549.55
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1901665
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:42:29.225 {
00:42:29.225 "params": {
00:42:29.225 "name": "Nvme$subsystem",
00:42:29.225 "trtype": "$TEST_TRANSPORT",
00:42:29.225 "traddr": "$NVMF_FIRST_TARGET_IP",
00:42:29.225 "adrfam": "ipv4",
00:42:29.225 "trsvcid": "$NVMF_PORT",
00:42:29.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:42:29.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:42:29.225 "hdgst": ${hdgst:-false},
00:42:29.225 "ddgst": ${ddgst:-false}
00:42:29.225 },
00:42:29.225 "method": "bdev_nvme_attach_controller"
00:42:29.225 }
00:42:29.225 EOF
00:42:29.225 )")
00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:42:29.225
[2024-12-09 05:35:43.148435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.225 [2024-12-09 05:35:43.148471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:29.225 05:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:29.225 "params": { 00:42:29.225 "name": "Nvme1", 00:42:29.225 "trtype": "tcp", 00:42:29.225 "traddr": "10.0.0.2", 00:42:29.225 "adrfam": "ipv4", 00:42:29.225 "trsvcid": "4420", 00:42:29.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:29.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:29.225 "hdgst": false, 00:42:29.225 "ddgst": false 00:42:29.225 }, 00:42:29.225 "method": "bdev_nvme_attach_controller" 00:42:29.225 }' 00:42:29.225 [2024-12-09 05:35:43.160402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.225 [2024-12-09 05:35:43.160421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.225 [2024-12-09 05:35:43.172381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.225 [2024-12-09 05:35:43.172397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.225 [2024-12-09 05:35:43.184398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.225 [2024-12-09 05:35:43.184415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.225 [2024-12-09 05:35:43.196383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.225 [2024-12-09 05:35:43.196398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.225 [2024-12-09 05:35:43.208374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.225 [2024-12-09 05:35:43.208388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.225 [2024-12-09 05:35:43.217710] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
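Both bdevperf runs receive their controller definition over an anonymous pipe rather than a config file: the "--json /dev/fd/63" (and /dev/fd/62 for the verify run) in the trace is bash process substitution around gen_nvmf_target_json, whose bdev_nvme_attach_controller entry is the JSON printed above. A schematic of that wiring under the same assumptions, shown as a pattern rather than the helper's exact output:

    # Hand bdevperf a generated JSON config with no temp file; the <(...) pipe
    # appears inside bdevperf as a /dev/fd/NN path, exactly as logged.
    # Flags from the trace: 5 s run, queue depth 128, 50/50 random read/write, 8 KiB I/Os.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192

The earlier run used the same pattern with "-t 10 -q 128 -w verify -o 8192", which produced the verification results table above.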
00:42:29.225 [2024-12-09 05:35:43.217797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901665 ] 00:42:29.485 [2024-12-09 05:35:43.220398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.220415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.232381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.232396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.244375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.244389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.256390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.256406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.268373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.268388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.280383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.280402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.292385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.292400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.304374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.304389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.316382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.316398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.328381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.328396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.340376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.340391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.352383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.352398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.354431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:29.485 [2024-12-09 05:35:43.364389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.364406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:42:29.485 [2024-12-09 05:35:43.376387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.376403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.388383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.388399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.400373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.400388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.412383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.412399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.424384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.485 [2024-12-09 05:35:43.424399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.485 [2024-12-09 05:35:43.429705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.486 [2024-12-09 05:35:43.436372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.486 [2024-12-09 05:35:43.436386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.486 [2024-12-09 05:35:43.448389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.486 [2024-12-09 05:35:43.448405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.486 [2024-12-09 05:35:43.460376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.486 [2024-12-09 05:35:43.460390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.486 [2024-12-09 05:35:43.472391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.486 [2024-12-09 05:35:43.472407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.745 [2024-12-09 05:35:43.484389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.745 [2024-12-09 05:35:43.484406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.745 [2024-12-09 05:35:43.496375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.745 [2024-12-09 05:35:43.496393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.745 [2024-12-09 05:35:43.508397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.745 [2024-12-09 05:35:43.508412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.745 [2024-12-09 05:35:43.520382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.745 [2024-12-09 05:35:43.520398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.745 [2024-12-09 05:35:43.532371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.745 [2024-12-09 05:35:43.532386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.745 [2024-12-09 
05:35:43.544382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.745 [2024-12-09 05:35:43.544397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.745 [2024-12-09 05:35:43.556382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.556397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.568383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.568398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.580384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.580400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.592371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.592386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.604385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.604400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.616384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.616400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.628381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.628402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.640395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.640412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.652374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.652391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.664385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.664400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.676381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.676397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.688373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.688388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.700381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.700397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.712384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.712402] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.724375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.724399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.746 [2024-12-09 05:35:43.736386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.746 [2024-12-09 05:35:43.736401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.748423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.748440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.760384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.760399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.772381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.772396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.784386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.784402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.796383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.796399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.842330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.842349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.852377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.852393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 Running I/O for 5 seconds... 
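The wall of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follows is not a failure of the run. While bdevperf drives zcopy I/O, the test keeps re-issuing nvmf_subsystem_add_ns for the already-populated NSID 1; each rejected RPC goes through the subsystem pause/resume path (hence the nvmf_rpc_ns_paused callback in every error record), which is the code path being exercised under load. A plausible shape of that driver loop, reconstructed from the trace rather than quoted from target/zcopy.sh:

    # Hammer the paused-subsystem path while I/O is in flight; every call is
    # expected to fail with "Requested NSID 1 already in use".
    while kill -0 $perfpid 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done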
00:42:30.006 [2024-12-09 05:35:43.867971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.867993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.882351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.882371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.896823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.896842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.912772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.912791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.928538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.928558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.942497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.942517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.956691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.956710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.971881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.971902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.006 [2024-12-09 05:35:43.986648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.006 [2024-12-09 05:35:43.986669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.000875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.000895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.016161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.016181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.030112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.030132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.044631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.044651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.056785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.056803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.072569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 
[2024-12-09 05:35:44.072590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.085585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.085604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.100261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.100280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.114208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.114228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.128673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.128692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.144224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.144243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.158145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.158165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.172470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.172489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.185936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.185955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.200694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.200714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.216385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.216405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.229694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.229715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.244114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.244134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.267 [2024-12-09 05:35:44.258179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.267 [2024-12-09 05:35:44.258198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.272620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.272640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.286082] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.286101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.300209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.300229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.314050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.314069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.328254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.328272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.342014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.342032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.356772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.356790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.372624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.372644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.385717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.385736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.400340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.400359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.413190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.413209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.427730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.427751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.442181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.442201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.457195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.457214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.472374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.472393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:30.528 [2024-12-09 05:35:44.486443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:30.528 [2024-12-09 05:35:44.486461] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:30.528 [2024-12-09 05:35:44.500761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:30.528 [2024-12-09 05:35:44.500778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... this two-line error pair repeats for every add-namespace attempt, at roughly 10-16 ms intervals, from 05:35:44.500 through 05:35:48.866; only the interleaved fio status lines below differ ...]
00:42:31.049 17036.00 IOPS, 133.09 MiB/s [2024-12-09T04:35:45.046Z]
00:42:32.091 17081.50 IOPS, 133.45 MiB/s [2024-12-09T04:35:46.088Z]
00:42:33.133 17085.33 IOPS, 133.48 MiB/s [2024-12-09T04:35:47.130Z]
00:42:33.932 17083.25 IOPS, 133.46 MiB/s [2024-12-09T04:35:47.929Z]
00:42:34.972 [2024-12-09 05:35:48.866168]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:34.972 [2024-12-09 05:35:48.866187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:34.972 17085.80 IOPS, 133.48 MiB/s 00:42:34.972 Latency(us) 00:42:34.972 [2024-12-09T04:35:48.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:34.972 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:42:34.972 Nvme1n1 : 5.01 17092.35 133.53 0.00 0.00 7482.66 2389.33 12506.45 00:42:34.972 [2024-12-09T04:35:48.969Z] =================================================================================================================== 00:42:34.972 [2024-12-09T04:35:48.969Z] Total : 17092.35 133.53 0.00 0.00 7482.66 2389.33 12506.45 00:42:34.972 [2024-12-09 05:35:48.876487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:34.972 [2024-12-09 05:35:48.876504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:34.972 [2024-12-09 05:35:48.888386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:34.972 [2024-12-09 05:35:48.888402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:34.972 [2024-12-09 05:35:48.900376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:34.972 [2024-12-09 05:35:48.900391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:34.972 [2024-12-09 05:35:48.912401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:34.972 [2024-12-09 05:35:48.912420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:34.972 [2024-12-09 05:35:48.924372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:34.972 [2024-12-09 05:35:48.924388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:34.972 [2024-12-09 05:35:48.936389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:34.972 [2024-12-09 05:35:48.936405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:34.972 [2024-12-09 05:35:48.948383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:34.972 [2024-12-09 05:35:48.948399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:34.972 [2024-12-09 05:35:48.960377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:34.972 [2024-12-09 05:35:48.960393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.232 [2024-12-09 05:35:48.972385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.232 [2024-12-09 05:35:48.972400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.232 [2024-12-09 05:35:48.984389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.232 [2024-12-09 05:35:48.984405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.232 [2024-12-09 05:35:48.996372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.232 [2024-12-09 05:35:48.996387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.232 [2024-12-09 05:35:49.008383] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.232 [2024-12-09 05:35:49.008398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.232 [2024-12-09 05:35:49.020372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.232 [2024-12-09 05:35:49.020387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.232 [2024-12-09 05:35:49.032385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.232 [2024-12-09 05:35:49.032400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.044392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.044409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.056371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.056386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.068393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.068409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.080397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.080415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.092375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.092390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.104394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.104410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.116372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.116387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.128381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.128396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.140381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.140397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.152373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.152388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.164384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.164399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.176384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.176399] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.188373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.188388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.200389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.200405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.212380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.212395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.233 [2024-12-09 05:35:49.224395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.233 [2024-12-09 05:35:49.224412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.493 [2024-12-09 05:35:49.236383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.493 [2024-12-09 05:35:49.236399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.493 [2024-12-09 05:35:49.248370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.493 [2024-12-09 05:35:49.248386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.493 [2024-12-09 05:35:49.260386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.493 [2024-12-09 05:35:49.260403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.493 [2024-12-09 05:35:49.272393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.493 [2024-12-09 05:35:49.272410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.493 [2024-12-09 05:35:49.284375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.493 [2024-12-09 05:35:49.284392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.493 [2024-12-09 05:35:49.296387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.493 [2024-12-09 05:35:49.296403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.493 [2024-12-09 05:35:49.308378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.493 [2024-12-09 05:35:49.308394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.493 [2024-12-09 05:35:49.320385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:35.493 [2024-12-09 05:35:49.320401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:35.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1901665) - No such process 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1901665 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.493 05:35:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:35.493 delay0 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.493 05:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:42:35.493 [2024-12-09 05:35:49.478265] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:42.078 Initializing NVMe Controllers 00:42:42.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:42.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:42.078 Initialization complete. Launching workers. 
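The trace above is the zcopy abort phase in a nutshell: bdev_delay_create wraps malloc0 with one second of injected latency on every operation (-r/-t/-w/-n set average and p99 read/write latency in microseconds), the delayed bdev is published as NSID 1, and the abort example then has five seconds of guaranteed in-flight commands to cancel. As a standalone sketch, with the rpc.py path and default RPC socket assumed and every flag taken verbatim from the trace:

  # wrap malloc0 in a delay bdev: 1 s average and p99 latency for reads and writes
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # publish the slow bdev as NSID 1 of the test subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue 64-deep random I/O on lcore 0 for 5 s, aborting commands in flight
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The counters printed next tally the outcome: of 445 abort requests submitted, 248 caught their command and 197 presumably raced with a completion, with none failing outright.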
00:42:42.078 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 155 00:42:42.078 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 445, failed to submit 30 00:42:42.078 success 248, unsuccessful 197, failed 0 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:42.078 rmmod nvme_tcp 00:42:42.078 rmmod nvme_fabrics 00:42:42.078 rmmod nvme_keyring 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1899316 ']' 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1899316 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1899316 ']' 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1899316 00:42:42.078 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:42:42.079 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:42.079 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1899316 00:42:42.079 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:42.079 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:42.079 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1899316' 00:42:42.079 killing process with pid 1899316 00:42:42.079 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1899316 00:42:42.079 05:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1899316 00:42:42.649 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:42.650 05:35:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:42.650 05:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:44.557 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:44.557 00:42:44.557 real 0m35.126s 00:42:44.557 user 0m45.724s 00:42:44.557 sys 0m12.271s 00:42:44.557 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:44.557 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:44.557 ************************************ 00:42:44.557 END TEST nvmf_zcopy 00:42:44.557 ************************************ 00:42:44.557 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:44.557 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:44.557 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:44.557 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:44.817 ************************************ 00:42:44.817 START TEST nvmf_nmic 00:42:44.817 ************************************ 00:42:44.817 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:44.817 * Looking for test storage... 
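Before the nmic run starting above, nvmftestfini tore the zcopy environment back down; the traced steps reduce to unloading the host-side NVMe-oF modules, killing the target process, and stripping only the SPDK-tagged firewall rules. A rough equivalent, with the pid from this run, and with ip netns delete standing in as an assumed equivalent of the harness's _remove_spdk_ns helper:

  modprobe -v -r nvme-tcp          # drops nvme_tcp, nvme_fabrics, nvme_keyring (rmmod lines above)
  modprobe -v -r nvme-fabrics
  kill 1899316 && wait 1899316     # stop the nvmf_tgt started for the test
  # remove only iptables rules carrying the SPDK_NVMF comment tag
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1         # clear the initiator-side address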
00:42:44.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:44.817 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:44.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.818 --rc genhtml_branch_coverage=1 00:42:44.818 --rc genhtml_function_coverage=1 00:42:44.818 --rc genhtml_legend=1 00:42:44.818 --rc geninfo_all_blocks=1 00:42:44.818 --rc geninfo_unexecuted_blocks=1 00:42:44.818 00:42:44.818 ' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:44.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.818 --rc genhtml_branch_coverage=1 00:42:44.818 --rc genhtml_function_coverage=1 00:42:44.818 --rc genhtml_legend=1 00:42:44.818 --rc geninfo_all_blocks=1 00:42:44.818 --rc geninfo_unexecuted_blocks=1 00:42:44.818 00:42:44.818 ' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:44.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.818 --rc genhtml_branch_coverage=1 00:42:44.818 --rc genhtml_function_coverage=1 00:42:44.818 --rc genhtml_legend=1 00:42:44.818 --rc geninfo_all_blocks=1 00:42:44.818 --rc geninfo_unexecuted_blocks=1 00:42:44.818 00:42:44.818 ' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:44.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.818 --rc genhtml_branch_coverage=1 00:42:44.818 --rc genhtml_function_coverage=1 00:42:44.818 --rc genhtml_legend=1 00:42:44.818 --rc geninfo_all_blocks=1 00:42:44.818 --rc geninfo_unexecuted_blocks=1 00:42:44.818 00:42:44.818 ' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:44.818 05:35:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:44.818 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:42:44.819 05:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:52.953 05:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:52.953 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:52.954 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:52.954 05:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:52.954 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:52.954 Found net devices under 0000:31:00.0: cvl_0_0 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:52.954 
05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:52.954 Found net devices under 0000:31:00.1: cvl_0_1 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
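Both E810 ports live in the same chassis, so the harness parks the target-side port in a private network namespace; that way initiator traffic to 10.0.0.2 actually crosses the wire to cvl_0_0 instead of short-circuiting through loopback. The setup just traced (the link-up steps and ping verification follow immediately below), gathered into one place:

  ip netns add cvl_0_0_ns_spdk                   # namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up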
00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:52.954 05:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:52.954 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:52.954 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:52.954 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:52.954 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:52.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:52.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:42:52.955 00:42:52.955 --- 10.0.0.2 ping statistics --- 00:42:52.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:52.955 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:52.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:52.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:42:52.955 00:42:52.955 --- 10.0.0.1 ping statistics --- 00:42:52.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:52.955 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1908450 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1908450 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1908450 ']' 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:52.955 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.955 [2024-12-09 05:36:06.210121] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:52.955 [2024-12-09 05:36:06.212437] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:42:52.955 [2024-12-09 05:36:06.212522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:52.955 [2024-12-09 05:36:06.367949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:52.955 [2024-12-09 05:36:06.492641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:52.955 [2024-12-09 05:36:06.492706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:52.955 [2024-12-09 05:36:06.492722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:52.955 [2024-12-09 05:36:06.492732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:52.955 [2024-12-09 05:36:06.492745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:52.955 [2024-12-09 05:36:06.495701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:52.955 [2024-12-09 05:36:06.495858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:52.955 [2024-12-09 05:36:06.495925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.955 [2024-12-09 05:36:06.495929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:52.955 [2024-12-09 05:36:06.790363] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:52.955 [2024-12-09 05:36:06.791164] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:52.955 [2024-12-09 05:36:06.791933] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
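The target itself is then launched inside that namespace with --interrupt-mode, so the four reactors on core mask 0xF sleep on file descriptors between events instead of busy-polling; that is the behavior this interrupt-mode suite exists to exercise. A sketch of the launch-and-wait step, where the polling loop is an assumption standing in for the harness's waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # block until the app answers on its RPC socket before issuing rpc_cmd calls
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done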
00:42:52.955 [2024-12-09 05:36:06.792022] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:52.955 [2024-12-09 05:36:06.792261] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:53.217 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:53.217 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:42:53.217 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:53.217 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:53.217 05:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.217 [2024-12-09 05:36:07.029371] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.217 Malloc0 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
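For reference, the rpc_cmd calls in this test can be replayed against a running target with scripts/rpc.py (rpc_cmd in these test scripts drives the same JSON-RPC interface; the rpc.py path is abbreviated here, commands are taken verbatim from the log):

    # Provision the target exactly as the nmic test does, step by step.
    ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420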
00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.217 [2024-12-09 05:36:07.165444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:53.217 test case1: single bdev can't be used in multiple subsystems 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.217 [2024-12-09 05:36:07.200834] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:53.217 [2024-12-09 05:36:07.200888] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:53.217 [2024-12-09 05:36:07.200912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:53.217 request: 00:42:53.217 { 00:42:53.217 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:53.217 "namespace": { 00:42:53.217 "bdev_name": "Malloc0", 00:42:53.217 "no_auto_visible": false, 00:42:53.217 "hide_metadata": false 00:42:53.217 }, 00:42:53.217 "method": "nvmf_subsystem_add_ns", 00:42:53.217 "req_id": 1 00:42:53.217 } 00:42:53.217 Got JSON-RPC error response 00:42:53.217 response: 00:42:53.217 { 00:42:53.217 "code": -32602, 00:42:53.217 "message": "Invalid parameters" 00:42:53.217 } 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:53.217 05:36:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:53.217 Adding namespace failed - expected result. 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:53.217 test case2: host connect to nvmf target in multiple paths 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:53.217 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.477 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.477 [2024-12-09 05:36:07.213042] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:53.477 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.477 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:54.047 05:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:54.307 05:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:54.307 05:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:42:54.307 05:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:54.307 05:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:54.307 05:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:42:56.849 05:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:56.849 05:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:56.849 05:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:56.849 05:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:56.849 05:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:56.849 05:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:42:56.849 05:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:56.849 [global] 00:42:56.849 thread=1 00:42:56.849 invalidate=1 
00:42:56.849 rw=write 00:42:56.849 time_based=1 00:42:56.849 runtime=1 00:42:56.849 ioengine=libaio 00:42:56.849 direct=1 00:42:56.849 bs=4096 00:42:56.849 iodepth=1 00:42:56.849 norandommap=0 00:42:56.849 numjobs=1 00:42:56.849 00:42:56.849 verify_dump=1 00:42:56.849 verify_backlog=512 00:42:56.849 verify_state_save=0 00:42:56.849 do_verify=1 00:42:56.849 verify=crc32c-intel 00:42:56.849 [job0] 00:42:56.849 filename=/dev/nvme0n1 00:42:56.849 Could not set queue depth (nvme0n1) 00:42:56.849 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:56.849 fio-3.35 00:42:56.849 Starting 1 thread 00:42:58.231 00:42:58.231 job0: (groupid=0, jobs=1): err= 0: pid=1909608: Mon Dec 9 05:36:11 2024 00:42:58.231 read: IOPS=18, BW=75.2KiB/s (77.0kB/s)(76.0KiB/1011msec) 00:42:58.231 slat (nsec): min=25870, max=45792, avg=27306.11, stdev=4487.42 00:42:58.231 clat (usec): min=888, max=41369, avg=38874.79, stdev=9199.43 00:42:58.231 lat (usec): min=914, max=41415, avg=38902.09, stdev=9199.78 00:42:58.231 clat percentiles (usec): 00:42:58.231 | 1.00th=[ 889], 5.00th=[ 889], 10.00th=[41157], 20.00th=[41157], 00:42:58.231 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:58.231 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:58.231 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:58.231 | 99.99th=[41157] 00:42:58.231 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:42:58.231 slat (usec): min=9, max=30705, avg=89.36, stdev=1355.74 00:42:58.231 clat (usec): min=184, max=761, avg=432.26, stdev=121.55 00:42:58.231 lat (usec): min=198, max=31416, avg=521.62, stdev=1373.25 00:42:58.231 clat percentiles (usec): 00:42:58.231 | 1.00th=[ 204], 5.00th=[ 262], 10.00th=[ 302], 20.00th=[ 318], 00:42:58.231 | 30.00th=[ 338], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 453], 00:42:58.231 | 70.00th=[ 502], 80.00th=[ 545], 90.00th=[ 594], 95.00th=[ 652], 00:42:58.231 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 758], 99.95th=[ 758], 00:42:58.231 | 99.99th=[ 758] 00:42:58.231 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:58.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:58.231 lat (usec) : 250=4.52%, 500=61.02%, 750=30.70%, 1000=0.38% 00:42:58.231 lat (msec) : 50=3.39% 00:42:58.231 cpu : usr=0.50%, sys=1.88%, ctx=534, majf=0, minf=1 00:42:58.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:58.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:58.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:58.231 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:58.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:58.231 00:42:58.231 Run status group 0 (all jobs): 00:42:58.231 READ: bw=75.2KiB/s (77.0kB/s), 75.2KiB/s-75.2KiB/s (77.0kB/s-77.0kB/s), io=76.0KiB (77.8kB), run=1011-1011msec 00:42:58.231 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:42:58.231 00:42:58.231 Disk stats (read/write): 00:42:58.231 nvme0n1: ios=41/512, merge=0/0, ticks=1579/217, in_queue=1796, util=98.80% 00:42:58.231 05:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:58.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:58.231 05:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:58.231 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:58.232 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:58.232 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:58.232 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:58.232 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:58.232 rmmod nvme_tcp 00:42:58.492 rmmod nvme_fabrics 00:42:58.492 rmmod nvme_keyring 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1908450 ']' 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1908450 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1908450 ']' 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1908450 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1908450 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1908450' 00:42:58.492 killing process with pid 1908450 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1908450 00:42:58.492 05:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1908450 00:42:59.062 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:59.062 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:59.062 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:59.062 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:59.062 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:42:59.062 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:59.062 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:42:59.062 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:59.063 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:59.063 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:59.063 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:59.063 05:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:01.604 00:43:01.604 real 0m16.558s 00:43:01.604 user 0m36.706s 00:43:01.604 sys 0m7.500s 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:01.604 ************************************ 00:43:01.604 END TEST nvmf_nmic 00:43:01.604 ************************************ 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:01.604 ************************************ 00:43:01.604 START TEST nvmf_fio_target 00:43:01.604 ************************************ 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:01.604 * Looking for test storage... 
00:43:01.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:01.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.604 --rc genhtml_branch_coverage=1 00:43:01.604 --rc genhtml_function_coverage=1 00:43:01.604 --rc genhtml_legend=1 00:43:01.604 --rc geninfo_all_blocks=1 00:43:01.604 --rc geninfo_unexecuted_blocks=1 00:43:01.604 00:43:01.604 ' 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:01.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.604 --rc genhtml_branch_coverage=1 00:43:01.604 --rc genhtml_function_coverage=1 00:43:01.604 --rc genhtml_legend=1 00:43:01.604 --rc geninfo_all_blocks=1 00:43:01.604 --rc geninfo_unexecuted_blocks=1 00:43:01.604 00:43:01.604 ' 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:01.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.604 --rc genhtml_branch_coverage=1 00:43:01.604 --rc genhtml_function_coverage=1 00:43:01.604 --rc genhtml_legend=1 00:43:01.604 --rc geninfo_all_blocks=1 00:43:01.604 --rc geninfo_unexecuted_blocks=1 00:43:01.604 00:43:01.604 ' 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:01.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.604 --rc genhtml_branch_coverage=1 00:43:01.604 --rc genhtml_function_coverage=1 00:43:01.604 --rc genhtml_legend=1 00:43:01.604 --rc geninfo_all_blocks=1 00:43:01.604 --rc geninfo_unexecuted_blocks=1 00:43:01.604 
00:43:01.604 ' 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:43:01.604 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:43:01.605 05:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:09.735 05:36:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:09.735 05:36:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:09.735 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:09.735 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:09.735 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:09.736 Found net 
devices under 0000:31:00.0: cvl_0_0 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:09.736 Found net devices under 0000:31:00.1: cvl_0_1 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:09.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:09.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:43:09.736 00:43:09.736 --- 10.0.0.2 ping statistics --- 00:43:09.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:09.736 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:09.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:09.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:43:09.736 00:43:09.736 --- 10.0.0.1 ping statistics --- 00:43:09.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:09.736 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1914495 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1914495 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1914495 ']' 00:43:09.736 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:09.737 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:09.737 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:09.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
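The NVMF_APP recomposition logged above is the namespace plumbing for the rest of the run: every later launch of the app command goes through ip netns exec. A reduced sketch of the effect, using the array names from nvmf/common.sh as they appear in this log:

    # Set earlier in common.sh: the run-in-target-netns prefix.
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    # Prepend it, so expanding NVMF_APP from here on yields e.g.:
    #   ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0xF &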
00:43:09.737 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:09.737 05:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:09.737 [2024-12-09 05:36:22.793866] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:09.737 [2024-12-09 05:36:22.796608] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:43:09.737 [2024-12-09 05:36:22.796711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:09.737 [2024-12-09 05:36:22.949146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:09.737 [2024-12-09 05:36:23.073766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:09.737 [2024-12-09 05:36:23.073838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:09.737 [2024-12-09 05:36:23.073854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:09.737 [2024-12-09 05:36:23.073865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:09.737 [2024-12-09 05:36:23.073877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:09.737 [2024-12-09 05:36:23.076836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:09.737 [2024-12-09 05:36:23.076945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:09.737 [2024-12-09 05:36:23.077008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:09.737 [2024-12-09 05:36:23.077035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:09.737 [2024-12-09 05:36:23.370276] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:09.737 [2024-12-09 05:36:23.371057] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:09.737 [2024-12-09 05:36:23.371798] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:09.737 [2024-12-09 05:36:23.371892] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:09.737 [2024-12-09 05:36:23.372130] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
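The fio test then fans out its backing devices, as the rpc.py calls below show: plain malloc bdevs, a two-member raid0, and a three-member concat, all exported as namespaces of cnode1. A condensed sketch of that provisioning, with the rpc.py path abbreviated, the per-bdev repetition folded into a loop, and the exact call order simplified relative to the log:

    # Seven 64 MiB / 512 B-block malloc bdevs feed the subsystem directly or
    # via RAID: Malloc2+Malloc3 -> raid0, Malloc4+Malloc5+Malloc6 -> concat0.
    ./spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    ./spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 \
        -b 'Malloc4 Malloc5 Malloc6'
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done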
00:43:09.737 05:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:09.737 05:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:43:09.737 05:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:09.737 05:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:09.737 05:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:09.737 05:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:09.737 05:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:09.996 [2024-12-09 05:36:23.758450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:09.996 05:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:10.255 05:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:10.255 05:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:10.516 05:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:10.516 05:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:10.776 05:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:10.776 05:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:11.036 05:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:11.036 05:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:11.036 05:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:11.307 05:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:11.307 05:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:11.568 05:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:11.568 05:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:11.829 05:36:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:11.829 05:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:12.090 05:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:12.090 05:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:12.090 05:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:12.350 05:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:12.350 05:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:12.610 05:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:12.610 [2024-12-09 05:36:26.554176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:12.610 05:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:12.870 05:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:13.130 05:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:13.389 05:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:13.389 05:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:43:13.389 05:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:13.389 05:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:43:13.389 05:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:43:13.389 05:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:43:15.933 05:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:15.933 05:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:43:15.933 05:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:15.933 05:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:43:15.933 05:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:15.933 05:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:43:15.933 05:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:15.933 [global] 00:43:15.933 thread=1 00:43:15.933 invalidate=1 00:43:15.933 rw=write 00:43:15.933 time_based=1 00:43:15.933 runtime=1 00:43:15.933 ioengine=libaio 00:43:15.933 direct=1 00:43:15.933 bs=4096 00:43:15.933 iodepth=1 00:43:15.933 norandommap=0 00:43:15.933 numjobs=1 00:43:15.933 00:43:15.933 verify_dump=1 00:43:15.933 verify_backlog=512 00:43:15.933 verify_state_save=0 00:43:15.933 do_verify=1 00:43:15.933 verify=crc32c-intel 00:43:15.933 [job0] 00:43:15.933 filename=/dev/nvme0n1 00:43:15.933 [job1] 00:43:15.933 filename=/dev/nvme0n2 00:43:15.933 [job2] 00:43:15.933 filename=/dev/nvme0n3 00:43:15.933 [job3] 00:43:15.933 filename=/dev/nvme0n4 00:43:15.933 Could not set queue depth (nvme0n1) 00:43:15.933 Could not set queue depth (nvme0n2) 00:43:15.933 Could not set queue depth (nvme0n3) 00:43:15.933 Could not set queue depth (nvme0n4) 00:43:15.933 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.933 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.933 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.933 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.933 fio-3.35 00:43:15.933 Starting 4 threads 00:43:17.450 00:43:17.450 job0: (groupid=0, jobs=1): err= 0: pid=1915962: Mon Dec 9 05:36:31 2024 00:43:17.450 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:17.450 slat (nsec): min=6649, max=55502, avg=25762.57, stdev=5559.31 00:43:17.450 clat (usec): min=689, max=1270, avg=959.01, stdev=74.41 00:43:17.450 lat (usec): min=696, max=1298, avg=984.77, stdev=76.41 00:43:17.450 clat percentiles (usec): 00:43:17.450 | 1.00th=[ 766], 5.00th=[ 824], 10.00th=[ 857], 20.00th=[ 898], 00:43:17.450 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:43:17.450 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:43:17.450 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1270], 99.95th=[ 1270], 00:43:17.450 | 99.99th=[ 1270] 00:43:17.450 write: IOPS=779, BW=3117KiB/s (3192kB/s)(3120KiB/1001msec); 0 zone resets 00:43:17.450 slat (nsec): min=9444, max=70701, avg=29695.99, stdev=11230.44 00:43:17.450 clat (usec): min=184, max=1025, avg=592.92, stdev=121.03 00:43:17.450 lat (usec): min=195, max=1061, avg=622.61, stdev=126.35 00:43:17.450 clat percentiles (usec): 00:43:17.450 | 1.00th=[ 310], 5.00th=[ 379], 10.00th=[ 437], 20.00th=[ 486], 00:43:17.450 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 635], 00:43:17.450 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 775], 00:43:17.450 | 99.00th=[ 857], 
99.50th=[ 898], 99.90th=[ 1029], 99.95th=[ 1029], 00:43:17.450 | 99.99th=[ 1029] 00:43:17.450 bw ( KiB/s): min= 4096, max= 4096, per=42.27%, avg=4096.00, stdev= 0.00, samples=1 00:43:17.450 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:17.450 lat (usec) : 250=0.15%, 500=13.39%, 750=42.57%, 1000=33.05% 00:43:17.450 lat (msec) : 2=10.84% 00:43:17.450 cpu : usr=2.70%, sys=4.70%, ctx=1295, majf=0, minf=1 00:43:17.450 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:17.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.450 issued rwts: total=512,780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.450 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:17.450 job1: (groupid=0, jobs=1): err= 0: pid=1915972: Mon Dec 9 05:36:31 2024 00:43:17.450 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:17.450 slat (nsec): min=24240, max=59072, avg=25428.60, stdev=3297.44 00:43:17.450 clat (usec): min=800, max=1450, avg=1103.30, stdev=87.38 00:43:17.450 lat (usec): min=825, max=1475, avg=1128.73, stdev=87.40 00:43:17.450 clat percentiles (usec): 00:43:17.450 | 1.00th=[ 848], 5.00th=[ 963], 10.00th=[ 996], 20.00th=[ 1037], 00:43:17.450 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:43:17.450 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1237], 00:43:17.450 | 99.00th=[ 1352], 99.50th=[ 1369], 99.90th=[ 1450], 99.95th=[ 1450], 00:43:17.450 | 99.99th=[ 1450] 00:43:17.450 write: IOPS=661, BW=2645KiB/s (2709kB/s)(2648KiB/1001msec); 0 zone resets 00:43:17.450 slat (nsec): min=9532, max=65699, avg=29828.58, stdev=8776.03 00:43:17.450 clat (usec): min=207, max=1449, avg=594.02, stdev=126.32 00:43:17.450 lat (usec): min=219, max=1482, avg=623.85, stdev=129.15 00:43:17.450 clat percentiles (usec): 00:43:17.450 | 1.00th=[ 306], 5.00th=[ 371], 10.00th=[ 416], 20.00th=[ 490], 00:43:17.450 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:43:17.450 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 775], 00:43:17.450 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 1450], 99.95th=[ 1450], 00:43:17.450 | 99.99th=[ 1450] 00:43:17.450 bw ( KiB/s): min= 4096, max= 4096, per=42.27%, avg=4096.00, stdev= 0.00, samples=1 00:43:17.450 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:17.450 lat (usec) : 250=0.17%, 500=12.35%, 750=39.01%, 1000=9.20% 00:43:17.450 lat (msec) : 2=39.27% 00:43:17.450 cpu : usr=1.30%, sys=3.80%, ctx=1174, majf=0, minf=2 00:43:17.450 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:17.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.450 issued rwts: total=512,662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.450 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:17.450 job2: (groupid=0, jobs=1): err= 0: pid=1915987: Mon Dec 9 05:36:31 2024 00:43:17.450 read: IOPS=15, BW=63.6KiB/s (65.1kB/s)(64.0KiB/1006msec) 00:43:17.450 slat (nsec): min=26562, max=45333, avg=27971.12, stdev=4632.62 00:43:17.450 clat (usec): min=41504, max=42049, avg=41935.08, stdev=121.24 00:43:17.450 lat (usec): min=41549, max=42076, avg=41963.05, stdev=116.86 00:43:17.450 clat percentiles (usec): 00:43:17.450 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:43:17.450 | 
30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:43:17.450 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:17.450 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:17.450 | 99.99th=[42206] 00:43:17.450 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:43:17.450 slat (nsec): min=9482, max=54254, avg=31851.81, stdev=8392.57 00:43:17.450 clat (usec): min=170, max=1101, avg=614.92, stdev=124.08 00:43:17.450 lat (usec): min=186, max=1113, avg=646.77, stdev=126.55 00:43:17.450 clat percentiles (usec): 00:43:17.450 | 1.00th=[ 285], 5.00th=[ 412], 10.00th=[ 457], 20.00th=[ 510], 00:43:17.450 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:43:17.450 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 791], 00:43:17.450 | 99.00th=[ 840], 99.50th=[ 881], 99.90th=[ 1106], 99.95th=[ 1106], 00:43:17.450 | 99.99th=[ 1106] 00:43:17.450 bw ( KiB/s): min= 4096, max= 4096, per=42.27%, avg=4096.00, stdev= 0.00, samples=1 00:43:17.450 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:17.450 lat (usec) : 250=0.76%, 500=17.42%, 750=67.61%, 1000=10.98% 00:43:17.450 lat (msec) : 2=0.19%, 50=3.03% 00:43:17.451 cpu : usr=1.19%, sys=1.59%, ctx=528, majf=0, minf=1 00:43:17.451 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:17.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.451 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:17.451 job3: (groupid=0, jobs=1): err= 0: pid=1915993: Mon Dec 9 05:36:31 2024 00:43:17.451 read: IOPS=15, BW=62.9KiB/s (64.4kB/s)(64.0KiB/1018msec) 00:43:17.451 slat (nsec): min=26411, max=27700, avg=26699.94, stdev=300.58 00:43:17.451 clat (usec): min=40992, max=42041, avg=41863.01, stdev=262.07 00:43:17.451 lat (usec): min=41019, max=42067, avg=41889.71, stdev=262.00 00:43:17.451 clat percentiles (usec): 00:43:17.451 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:43:17.451 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:43:17.451 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:17.451 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:17.451 | 99.99th=[42206] 00:43:17.451 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:43:17.451 slat (nsec): min=9417, max=55261, avg=30620.36, stdev=9244.08 00:43:17.451 clat (usec): min=281, max=1127, avg=642.14, stdev=127.56 00:43:17.451 lat (usec): min=314, max=1161, avg=672.76, stdev=130.55 00:43:17.451 clat percentiles (usec): 00:43:17.451 | 1.00th=[ 334], 5.00th=[ 412], 10.00th=[ 461], 20.00th=[ 537], 00:43:17.451 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 668], 60.00th=[ 693], 00:43:17.451 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 824], 00:43:17.451 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[ 1123], 99.95th=[ 1123], 00:43:17.451 | 99.99th=[ 1123] 00:43:17.451 bw ( KiB/s): min= 4096, max= 4096, per=42.27%, avg=4096.00, stdev= 0.00, samples=1 00:43:17.451 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:17.451 lat (usec) : 500=14.39%, 750=66.67%, 1000=15.72% 00:43:17.451 lat (msec) : 2=0.19%, 50=3.03% 00:43:17.451 cpu : usr=0.98%, sys=1.57%, ctx=528, majf=0, minf=2 00:43:17.451 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:17.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.451 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:17.451 00:43:17.451 Run status group 0 (all jobs): 00:43:17.451 READ: bw=4149KiB/s (4249kB/s), 62.9KiB/s-2046KiB/s (64.4kB/s-2095kB/s), io=4224KiB (4325kB), run=1001-1018msec 00:43:17.451 WRITE: bw=9690KiB/s (9922kB/s), 2012KiB/s-3117KiB/s (2060kB/s-3192kB/s), io=9864KiB (10.1MB), run=1001-1018msec 00:43:17.451 00:43:17.451 Disk stats (read/write): 00:43:17.451 nvme0n1: ios=536/512, merge=0/0, ticks=1457/252, in_queue=1709, util=96.79% 00:43:17.451 nvme0n2: ios=490/512, merge=0/0, ticks=620/291, in_queue=911, util=92.45% 00:43:17.451 nvme0n3: ios=12/512, merge=0/0, ticks=503/265, in_queue=768, util=88.51% 00:43:17.451 nvme0n4: ios=11/512, merge=0/0, ticks=460/289, in_queue=749, util=89.55% 00:43:17.451 05:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:17.451 [global] 00:43:17.451 thread=1 00:43:17.451 invalidate=1 00:43:17.451 rw=randwrite 00:43:17.451 time_based=1 00:43:17.451 runtime=1 00:43:17.451 ioengine=libaio 00:43:17.451 direct=1 00:43:17.451 bs=4096 00:43:17.451 iodepth=1 00:43:17.451 norandommap=0 00:43:17.451 numjobs=1 00:43:17.451 00:43:17.451 verify_dump=1 00:43:17.451 verify_backlog=512 00:43:17.451 verify_state_save=0 00:43:17.451 do_verify=1 00:43:17.451 verify=crc32c-intel 00:43:17.451 [job0] 00:43:17.451 filename=/dev/nvme0n1 00:43:17.451 [job1] 00:43:17.451 filename=/dev/nvme0n2 00:43:17.451 [job2] 00:43:17.451 filename=/dev/nvme0n3 00:43:17.451 [job3] 00:43:17.451 filename=/dev/nvme0n4 00:43:17.451 Could not set queue depth (nvme0n1) 00:43:17.451 Could not set queue depth (nvme0n2) 00:43:17.451 Could not set queue depth (nvme0n3) 00:43:17.451 Could not set queue depth (nvme0n4) 00:43:17.753 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:17.753 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:17.753 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:17.753 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:17.753 fio-3.35 00:43:17.753 Starting 4 threads 00:43:18.717 00:43:18.717 job0: (groupid=0, jobs=1): err= 0: pid=1916387: Mon Dec 9 05:36:32 2024 00:43:18.717 read: IOPS=15, BW=63.8KiB/s (65.3kB/s)(64.0KiB/1003msec) 00:43:18.717 slat (nsec): min=26709, max=27486, avg=27158.50, stdev=188.02 00:43:18.717 clat (usec): min=40966, max=42031, avg=41848.36, stdev=324.42 00:43:18.717 lat (usec): min=40993, max=42059, avg=41875.52, stdev=324.33 00:43:18.717 clat percentiles (usec): 00:43:18.717 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:43:18.717 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:43:18.717 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:18.717 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:18.717 | 99.99th=[42206] 00:43:18.717 write: IOPS=510, BW=2042KiB/s 
(2091kB/s)(2048KiB/1003msec); 0 zone resets 00:43:18.717 slat (nsec): min=9393, max=66750, avg=30005.70, stdev=10002.40 00:43:18.717 clat (usec): min=185, max=1024, avg=611.53, stdev=134.33 00:43:18.717 lat (usec): min=196, max=1058, avg=641.53, stdev=139.31 00:43:18.717 clat percentiles (usec): 00:43:18.718 | 1.00th=[ 302], 5.00th=[ 371], 10.00th=[ 437], 20.00th=[ 510], 00:43:18.718 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:43:18.718 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 824], 00:43:18.718 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 1029], 99.95th=[ 1029], 00:43:18.718 | 99.99th=[ 1029] 00:43:18.718 bw ( KiB/s): min= 4096, max= 4096, per=33.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:18.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:18.718 lat (usec) : 250=0.38%, 500=17.23%, 750=66.67%, 1000=12.50% 00:43:18.718 lat (msec) : 2=0.19%, 50=3.03% 00:43:18.718 cpu : usr=0.80%, sys=2.10%, ctx=529, majf=0, minf=1 00:43:18.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.718 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.718 job1: (groupid=0, jobs=1): err= 0: pid=1916404: Mon Dec 9 05:36:32 2024 00:43:18.718 read: IOPS=516, BW=2066KiB/s (2115kB/s)(2080KiB/1007msec) 00:43:18.718 slat (nsec): min=7037, max=54688, avg=23901.57, stdev=7763.59 00:43:18.718 clat (usec): min=374, max=41585, avg=1023.02, stdev=3070.37 00:43:18.718 lat (usec): min=401, max=41602, avg=1046.92, stdev=3070.58 00:43:18.718 clat percentiles (usec): 00:43:18.718 | 1.00th=[ 519], 5.00th=[ 611], 10.00th=[ 676], 20.00th=[ 725], 00:43:18.718 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 807], 60.00th=[ 824], 00:43:18.718 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 914], 00:43:18.718 | 99.00th=[ 1057], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:43:18.718 | 99.99th=[41681] 00:43:18.718 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:43:18.718 slat (nsec): min=9689, max=54617, avg=27643.06, stdev=10593.98 00:43:18.718 clat (usec): min=137, max=680, avg=411.57, stdev=87.26 00:43:18.718 lat (usec): min=150, max=713, avg=439.21, stdev=91.99 00:43:18.718 clat percentiles (usec): 00:43:18.718 | 1.00th=[ 202], 5.00th=[ 273], 10.00th=[ 289], 20.00th=[ 334], 00:43:18.718 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 424], 60.00th=[ 445], 00:43:18.718 | 70.00th=[ 465], 80.00th=[ 486], 90.00th=[ 515], 95.00th=[ 545], 00:43:18.718 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 685], 99.95th=[ 685], 00:43:18.718 | 99.99th=[ 685] 00:43:18.718 bw ( KiB/s): min= 4096, max= 4096, per=33.57%, avg=4096.00, stdev= 0.00, samples=2 00:43:18.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:43:18.718 lat (usec) : 250=1.55%, 500=55.83%, 750=18.20%, 1000=23.83% 00:43:18.718 lat (msec) : 2=0.39%, 50=0.19% 00:43:18.718 cpu : usr=2.09%, sys=4.27%, ctx=1546, majf=0, minf=1 00:43:18.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.718 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.718 
latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.718 job2: (groupid=0, jobs=1): err= 0: pid=1916422: Mon Dec 9 05:36:32 2024 00:43:18.718 read: IOPS=651, BW=2605KiB/s (2668kB/s)(2608KiB/1001msec) 00:43:18.718 slat (nsec): min=7430, max=46413, avg=24187.34, stdev=8298.27 00:43:18.718 clat (usec): min=478, max=961, avg=773.91, stdev=75.97 00:43:18.718 lat (usec): min=487, max=989, avg=798.09, stdev=78.30 00:43:18.718 clat percentiles (usec): 00:43:18.718 | 1.00th=[ 537], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 709], 00:43:18.718 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 799], 00:43:18.718 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 873], 00:43:18.718 | 99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 963], 00:43:18.718 | 99.99th=[ 963] 00:43:18.718 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:18.718 slat (nsec): min=9746, max=69723, avg=28455.06, stdev=10969.02 00:43:18.718 clat (usec): min=228, max=718, avg=428.28, stdev=86.57 00:43:18.718 lat (usec): min=239, max=752, avg=456.74, stdev=91.73 00:43:18.718 clat percentiles (usec): 00:43:18.718 | 1.00th=[ 253], 5.00th=[ 281], 10.00th=[ 310], 20.00th=[ 347], 00:43:18.718 | 30.00th=[ 379], 40.00th=[ 416], 50.00th=[ 437], 60.00th=[ 453], 00:43:18.718 | 70.00th=[ 469], 80.00th=[ 498], 90.00th=[ 529], 95.00th=[ 562], 00:43:18.718 | 99.00th=[ 652], 99.50th=[ 693], 99.90th=[ 709], 99.95th=[ 717], 00:43:18.718 | 99.99th=[ 717] 00:43:18.718 bw ( KiB/s): min= 4096, max= 4096, per=33.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:18.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:18.718 lat (usec) : 250=0.36%, 500=49.34%, 750=23.27%, 1000=27.03% 00:43:18.718 cpu : usr=2.70%, sys=4.20%, ctx=1677, majf=0, minf=1 00:43:18.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.718 issued rwts: total=652,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.718 job3: (groupid=0, jobs=1): err= 0: pid=1916428: Mon Dec 9 05:36:32 2024 00:43:18.718 read: IOPS=236, BW=946KiB/s (969kB/s)(952KiB/1006msec) 00:43:18.718 slat (nsec): min=7547, max=49573, avg=25953.12, stdev=7537.56 00:43:18.718 clat (usec): min=476, max=42094, avg=3188.67, stdev=9566.20 00:43:18.718 lat (usec): min=504, max=42122, avg=3214.62, stdev=9566.29 00:43:18.718 clat percentiles (usec): 00:43:18.718 | 1.00th=[ 529], 5.00th=[ 619], 10.00th=[ 652], 20.00th=[ 709], 00:43:18.718 | 30.00th=[ 742], 40.00th=[ 783], 50.00th=[ 807], 60.00th=[ 840], 00:43:18.718 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 1037], 95.00th=[41157], 00:43:18.718 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:43:18.718 | 99.99th=[42206] 00:43:18.718 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:43:18.718 slat (nsec): min=9980, max=57730, avg=25862.70, stdev=11631.56 00:43:18.718 clat (usec): min=159, max=758, avg=433.02, stdev=90.52 00:43:18.718 lat (usec): min=194, max=769, avg=458.89, stdev=95.18 00:43:18.718 clat percentiles (usec): 00:43:18.718 | 1.00th=[ 249], 5.00th=[ 285], 10.00th=[ 306], 20.00th=[ 347], 00:43:18.718 | 30.00th=[ 375], 40.00th=[ 420], 50.00th=[ 449], 60.00th=[ 465], 00:43:18.718 | 70.00th=[ 486], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 562], 00:43:18.718 | 
99.00th=[ 635], 99.50th=[ 676], 99.90th=[ 758], 99.95th=[ 758], 00:43:18.718 | 99.99th=[ 758] 00:43:18.718 bw ( KiB/s): min= 4096, max= 4096, per=33.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:18.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:18.718 lat (usec) : 250=1.07%, 500=50.93%, 750=25.87%, 1000=18.93% 00:43:18.718 lat (msec) : 2=1.20%, 4=0.13%, 50=1.87% 00:43:18.718 cpu : usr=1.29%, sys=1.69%, ctx=751, majf=0, minf=1 00:43:18.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.718 issued rwts: total=238,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.718 00:43:18.718 Run status group 0 (all jobs): 00:43:18.718 READ: bw=5664KiB/s (5800kB/s), 63.8KiB/s-2605KiB/s (65.3kB/s-2668kB/s), io=5704KiB (5841kB), run=1001-1007msec 00:43:18.718 WRITE: bw=11.9MiB/s (12.5MB/s), 2036KiB/s-4092KiB/s (2085kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1007msec 00:43:18.718 00:43:18.718 Disk stats (read/write): 00:43:18.718 nvme0n1: ios=53/512, merge=0/0, ticks=550/243, in_queue=793, util=87.37% 00:43:18.718 nvme0n2: ios=555/951, merge=0/0, ticks=498/383, in_queue=881, util=91.24% 00:43:18.718 nvme0n3: ios=573/929, merge=0/0, ticks=1021/378, in_queue=1399, util=93.18% 00:43:18.718 nvme0n4: ios=253/512, merge=0/0, ticks=762/219, in_queue=981, util=94.70% 00:43:18.718 05:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:18.977 [global] 00:43:18.977 thread=1 00:43:18.977 invalidate=1 00:43:18.977 rw=write 00:43:18.977 time_based=1 00:43:18.977 runtime=1 00:43:18.977 ioengine=libaio 00:43:18.977 direct=1 00:43:18.977 bs=4096 00:43:18.977 iodepth=128 00:43:18.977 norandommap=0 00:43:18.977 numjobs=1 00:43:18.977 00:43:18.977 verify_dump=1 00:43:18.977 verify_backlog=512 00:43:18.977 verify_state_save=0 00:43:18.977 do_verify=1 00:43:18.977 verify=crc32c-intel 00:43:18.977 [job0] 00:43:18.977 filename=/dev/nvme0n1 00:43:18.977 [job1] 00:43:18.977 filename=/dev/nvme0n2 00:43:18.977 [job2] 00:43:18.977 filename=/dev/nvme0n3 00:43:18.977 [job3] 00:43:18.977 filename=/dev/nvme0n4 00:43:18.977 Could not set queue depth (nvme0n1) 00:43:18.977 Could not set queue depth (nvme0n2) 00:43:18.977 Could not set queue depth (nvme0n3) 00:43:18.977 Could not set queue depth (nvme0n4) 00:43:19.237 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:19.237 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:19.237 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:19.237 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:19.237 fio-3.35 00:43:19.237 Starting 4 threads 00:43:20.616 00:43:20.616 job0: (groupid=0, jobs=1): err= 0: pid=1916866: Mon Dec 9 05:36:34 2024 00:43:20.616 read: IOPS=9213, BW=36.0MiB/s (37.7MB/s)(36.2MiB/1005msec) 00:43:20.616 slat (nsec): min=995, max=6319.6k, avg=52582.26, stdev=399013.30 00:43:20.616 clat (usec): min=2516, max=16028, avg=7160.33, stdev=1794.15 00:43:20.616 lat (usec): min=2521, max=16031, 
avg=7212.91, stdev=1812.84 00:43:20.616 clat percentiles (usec): 00:43:20.616 | 1.00th=[ 3425], 5.00th=[ 4621], 10.00th=[ 5276], 20.00th=[ 5800], 00:43:20.616 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 7177], 00:43:20.616 | 70.00th=[ 7701], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[10683], 00:43:20.616 | 99.00th=[12256], 99.50th=[12780], 99.90th=[14091], 99.95th=[14091], 00:43:20.616 | 99.99th=[16057] 00:43:20.616 write: IOPS=9679, BW=37.8MiB/s (39.6MB/s)(38.0MiB/1005msec); 0 zone resets 00:43:20.616 slat (nsec): min=1690, max=5663.6k, avg=47804.16, stdev=341883.88 00:43:20.616 clat (usec): min=1180, max=13231, avg=6264.80, stdev=1693.32 00:43:20.616 lat (usec): min=1191, max=13236, avg=6312.60, stdev=1705.05 00:43:20.616 clat percentiles (usec): 00:43:20.616 | 1.00th=[ 2769], 5.00th=[ 3720], 10.00th=[ 4047], 20.00th=[ 4424], 00:43:20.616 | 30.00th=[ 5538], 40.00th=[ 5997], 50.00th=[ 6390], 60.00th=[ 6783], 00:43:20.616 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 8848], 95.00th=[ 9241], 00:43:20.616 | 99.00th=[10290], 99.50th=[10552], 99.90th=[12780], 99.95th=[13173], 00:43:20.616 | 99.99th=[13173] 00:43:20.616 bw ( KiB/s): min=36200, max=40878, per=39.26%, avg=38539.00, stdev=3307.85, samples=2 00:43:20.616 iops : min= 9050, max=10219, avg=9634.50, stdev=826.61, samples=2 00:43:20.616 lat (msec) : 2=0.12%, 4=5.78%, 10=88.76%, 20=5.34% 00:43:20.616 cpu : usr=6.37%, sys=8.86%, ctx=652, majf=0, minf=1 00:43:20.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:43:20.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:20.616 issued rwts: total=9260,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:20.616 job1: (groupid=0, jobs=1): err= 0: pid=1916869: Mon Dec 9 05:36:34 2024 00:43:20.616 read: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec) 00:43:20.616 slat (nsec): min=1021, max=16809k, avg=72848.67, stdev=618508.20 00:43:20.616 clat (usec): min=3106, max=54055, avg=10046.55, stdev=4719.27 00:43:20.616 lat (usec): min=3113, max=54062, avg=10119.40, stdev=4753.58 00:43:20.616 clat percentiles (usec): 00:43:20.616 | 1.00th=[ 5276], 5.00th=[ 6063], 10.00th=[ 6390], 20.00th=[ 6783], 00:43:20.616 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8979], 60.00th=[ 9503], 00:43:20.616 | 70.00th=[10421], 80.00th=[11863], 90.00th=[16057], 95.00th=[20579], 00:43:20.616 | 99.00th=[28443], 99.50th=[30016], 99.90th=[41157], 99.95th=[41157], 00:43:20.616 | 99.99th=[54264] 00:43:20.616 write: IOPS=6428, BW=25.1MiB/s (26.3MB/s)(25.3MiB/1009msec); 0 zone resets 00:43:20.616 slat (nsec): min=1711, max=30169k, avg=79949.73, stdev=773396.37 00:43:20.616 clat (usec): min=751, max=57389, avg=9581.98, stdev=5936.55 00:43:20.616 lat (usec): min=760, max=57402, avg=9661.93, stdev=6013.34 00:43:20.616 clat percentiles (usec): 00:43:20.616 | 1.00th=[ 2704], 5.00th=[ 4080], 10.00th=[ 4359], 20.00th=[ 5997], 00:43:20.616 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 8160], 00:43:20.616 | 70.00th=[ 9241], 80.00th=[11731], 90.00th=[19268], 95.00th=[24249], 00:43:20.616 | 99.00th=[27132], 99.50th=[27395], 99.90th=[40109], 99.95th=[41681], 00:43:20.616 | 99.99th=[57410] 00:43:20.616 bw ( KiB/s): min=23153, max=27672, per=25.89%, avg=25412.50, stdev=3195.42, samples=2 00:43:20.616 iops : min= 5788, max= 6918, avg=6353.00, stdev=799.03, samples=2 00:43:20.616 lat (usec) : 
1000=0.05% 00:43:20.616 lat (msec) : 2=0.20%, 4=1.87%, 10=69.08%, 20=22.25%, 50=6.54% 00:43:20.616 lat (msec) : 100=0.02% 00:43:20.616 cpu : usr=3.08%, sys=7.34%, ctx=404, majf=0, minf=2 00:43:20.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:20.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:20.617 issued rwts: total=6144,6486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:20.617 job2: (groupid=0, jobs=1): err= 0: pid=1916884: Mon Dec 9 05:36:34 2024 00:43:20.617 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:43:20.617 slat (nsec): min=1028, max=21184k, avg=144723.41, stdev=1110766.92 00:43:20.617 clat (usec): min=4132, max=56283, avg=19137.20, stdev=10038.81 00:43:20.617 lat (usec): min=4137, max=56311, avg=19281.92, stdev=10134.77 00:43:20.617 clat percentiles (usec): 00:43:20.617 | 1.00th=[ 6915], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[ 9634], 00:43:20.617 | 30.00th=[11600], 40.00th=[13173], 50.00th=[16188], 60.00th=[22152], 00:43:20.617 | 70.00th=[23200], 80.00th=[27919], 90.00th=[33162], 95.00th=[38536], 00:43:20.617 | 99.00th=[49021], 99.50th=[51119], 99.90th=[51119], 99.95th=[51643], 00:43:20.617 | 99.99th=[56361] 00:43:20.617 write: IOPS=3114, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1005msec); 0 zone resets 00:43:20.617 slat (nsec): min=1721, max=16589k, avg=170662.51, stdev=1061294.87 00:43:20.617 clat (usec): min=1106, max=84524, avg=21960.39, stdev=16822.28 00:43:20.617 lat (usec): min=1117, max=84553, avg=22131.05, stdev=16945.12 00:43:20.617 clat percentiles (usec): 00:43:20.617 | 1.00th=[ 5145], 5.00th=[ 5866], 10.00th=[ 7242], 20.00th=[ 9896], 00:43:20.617 | 30.00th=[12911], 40.00th=[13960], 50.00th=[16057], 60.00th=[19006], 00:43:20.617 | 70.00th=[24249], 80.00th=[32113], 90.00th=[41157], 95.00th=[64226], 00:43:20.617 | 99.00th=[79168], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:43:20.617 | 99.99th=[84411] 00:43:20.617 bw ( KiB/s): min=12247, max=12304, per=12.51%, avg=12275.50, stdev=40.31, samples=2 00:43:20.617 iops : min= 3061, max= 3076, avg=3068.50, stdev=10.61, samples=2 00:43:20.617 lat (msec) : 2=0.08%, 4=0.19%, 10=22.54%, 20=37.13%, 50=36.47% 00:43:20.617 lat (msec) : 100=3.58% 00:43:20.617 cpu : usr=2.59%, sys=3.59%, ctx=231, majf=0, minf=2 00:43:20.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:43:20.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:20.617 issued rwts: total=3072,3130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:20.617 job3: (groupid=0, jobs=1): err= 0: pid=1916890: Mon Dec 9 05:36:34 2024 00:43:20.617 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:43:20.617 slat (nsec): min=1027, max=15771k, avg=91678.95, stdev=711090.65 00:43:20.617 clat (usec): min=2940, max=69822, avg=11957.54, stdev=7535.97 00:43:20.617 lat (usec): min=2945, max=69830, avg=12049.22, stdev=7606.81 00:43:20.617 clat percentiles (usec): 00:43:20.617 | 1.00th=[ 3228], 5.00th=[ 5735], 10.00th=[ 7111], 20.00th=[ 7898], 00:43:20.617 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:43:20.617 | 70.00th=[11076], 80.00th=[15139], 90.00th=[23200], 95.00th=[25297], 00:43:20.617 | 
99.00th=[39060], 99.50th=[48497], 99.90th=[61604], 99.95th=[69731], 00:43:20.617 | 99.99th=[69731] 00:43:20.617 write: IOPS=5377, BW=21.0MiB/s (22.0MB/s)(21.2MiB/1007msec); 0 zone resets 00:43:20.617 slat (nsec): min=1653, max=15297k, avg=84653.50, stdev=683404.82 00:43:20.617 clat (usec): min=1613, max=77522, avg=12083.84, stdev=10802.75 00:43:20.617 lat (usec): min=1621, max=77530, avg=12168.50, stdev=10870.37 00:43:20.617 clat percentiles (usec): 00:43:20.617 | 1.00th=[ 2835], 5.00th=[ 4817], 10.00th=[ 5604], 20.00th=[ 7111], 00:43:20.617 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[10290], 00:43:20.617 | 70.00th=[12780], 80.00th=[14615], 90.00th=[18482], 95.00th=[26870], 00:43:20.617 | 99.00th=[72877], 99.50th=[74974], 99.90th=[77071], 99.95th=[77071], 00:43:20.617 | 99.99th=[77071] 00:43:20.617 bw ( KiB/s): min=18634, max=23632, per=21.53%, avg=21133.00, stdev=3534.12, samples=2 00:43:20.617 iops : min= 4658, max= 5908, avg=5283.00, stdev=883.88, samples=2 00:43:20.617 lat (msec) : 2=0.09%, 4=3.08%, 10=59.42%, 20=26.71%, 50=9.26% 00:43:20.617 lat (msec) : 100=1.43% 00:43:20.617 cpu : usr=4.47%, sys=5.37%, ctx=364, majf=0, minf=1 00:43:20.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:20.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:20.617 issued rwts: total=5120,5415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:20.617 00:43:20.617 Run status group 0 (all jobs): 00:43:20.617 READ: bw=91.3MiB/s (95.8MB/s), 11.9MiB/s-36.0MiB/s (12.5MB/s-37.7MB/s), io=92.2MiB (96.6MB), run=1005-1009msec 00:43:20.617 WRITE: bw=95.9MiB/s (101MB/s), 12.2MiB/s-37.8MiB/s (12.8MB/s-39.6MB/s), io=96.7MiB (101MB), run=1005-1009msec 00:43:20.617 00:43:20.617 Disk stats (read/write): 00:43:20.617 nvme0n1: ios=7949/8192, merge=0/0, ticks=53414/47767, in_queue=101181, util=89.88% 00:43:20.617 nvme0n2: ios=5165/5340, merge=0/0, ticks=49404/47282, in_queue=96686, util=94.50% 00:43:20.617 nvme0n3: ios=2617/2802, merge=0/0, ticks=37467/53195, in_queue=90662, util=96.64% 00:43:20.617 nvme0n4: ios=3828/4096, merge=0/0, ticks=43900/47925, in_queue=91825, util=96.71% 00:43:20.617 05:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:20.617 [global] 00:43:20.617 thread=1 00:43:20.617 invalidate=1 00:43:20.617 rw=randwrite 00:43:20.617 time_based=1 00:43:20.617 runtime=1 00:43:20.617 ioengine=libaio 00:43:20.617 direct=1 00:43:20.617 bs=4096 00:43:20.617 iodepth=128 00:43:20.617 norandommap=0 00:43:20.617 numjobs=1 00:43:20.617 00:43:20.617 verify_dump=1 00:43:20.617 verify_backlog=512 00:43:20.617 verify_state_save=0 00:43:20.617 do_verify=1 00:43:20.617 verify=crc32c-intel 00:43:20.617 [job0] 00:43:20.617 filename=/dev/nvme0n1 00:43:20.617 [job1] 00:43:20.617 filename=/dev/nvme0n2 00:43:20.617 [job2] 00:43:20.617 filename=/dev/nvme0n3 00:43:20.617 [job3] 00:43:20.617 filename=/dev/nvme0n4 00:43:20.617 Could not set queue depth (nvme0n1) 00:43:20.617 Could not set queue depth (nvme0n2) 00:43:20.617 Could not set queue depth (nvme0n3) 00:43:20.617 Could not set queue depth (nvme0n4) 00:43:20.877 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:20.877 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:20.877 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:20.877 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:20.877 fio-3.35 00:43:20.877 Starting 4 threads 00:43:22.259 00:43:22.259 job0: (groupid=0, jobs=1): err= 0: pid=1917360: Mon Dec 9 05:36:36 2024 00:43:22.259 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:43:22.259 slat (nsec): min=896, max=11013k, avg=63920.23, stdev=455557.64 00:43:22.259 clat (usec): min=3191, max=19164, avg=8204.65, stdev=2124.61 00:43:22.259 lat (usec): min=3196, max=19360, avg=8268.57, stdev=2153.72 00:43:22.259 clat percentiles (usec): 00:43:22.259 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6194], 20.00th=[ 6718], 00:43:22.259 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 7963], 00:43:22.259 | 70.00th=[ 8356], 80.00th=[ 9503], 90.00th=[10683], 95.00th=[11994], 00:43:22.259 | 99.00th=[16450], 99.50th=[19006], 99.90th=[19006], 99.95th=[19268], 00:43:22.259 | 99.99th=[19268] 00:43:22.259 write: IOPS=8214, BW=32.1MiB/s (33.6MB/s)(32.2MiB/1003msec); 0 zone resets 00:43:22.259 slat (nsec): min=1504, max=6387.3k, avg=54043.38, stdev=318536.82 00:43:22.259 clat (usec): min=1138, max=15436, avg=7292.13, stdev=1686.07 00:43:22.259 lat (usec): min=1158, max=15443, avg=7346.17, stdev=1696.40 00:43:22.259 clat percentiles (usec): 00:43:22.259 | 1.00th=[ 2966], 5.00th=[ 4359], 10.00th=[ 5276], 20.00th=[ 6587], 00:43:22.259 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:43:22.259 | 70.00th=[ 7504], 80.00th=[ 7832], 90.00th=[ 9241], 95.00th=[10290], 00:43:22.259 | 99.00th=[13698], 99.50th=[13829], 99.90th=[15008], 99.95th=[15008], 00:43:22.259 | 99.99th=[15401] 00:43:22.259 bw ( KiB/s): min=32768, max=32768, per=33.96%, avg=32768.00, stdev= 0.00, samples=2 00:43:22.259 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:43:22.259 lat (msec) : 2=0.10%, 4=1.58%, 10=87.59%, 20=10.72% 00:43:22.259 cpu : usr=3.39%, sys=6.89%, ctx=897, majf=0, minf=1 00:43:22.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:22.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:22.259 issued rwts: total=8192,8239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:22.259 job1: (groupid=0, jobs=1): err= 0: pid=1917361: Mon Dec 9 05:36:36 2024 00:43:22.259 read: IOPS=6204, BW=24.2MiB/s (25.4MB/s)(24.3MiB/1003msec) 00:43:22.259 slat (nsec): min=994, max=8514.0k, avg=87706.95, stdev=596854.58 00:43:22.259 clat (usec): min=2246, max=24813, avg=11110.28, stdev=5045.08 00:43:22.259 lat (usec): min=2765, max=24820, avg=11197.99, stdev=5077.49 00:43:22.259 clat percentiles (usec): 00:43:22.259 | 1.00th=[ 4555], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6783], 00:43:22.259 | 30.00th=[ 7242], 40.00th=[ 8094], 50.00th=[ 9241], 60.00th=[11207], 00:43:22.259 | 70.00th=[12649], 80.00th=[15795], 90.00th=[19268], 95.00th=[22414], 00:43:22.259 | 99.00th=[23725], 99.50th=[24249], 99.90th=[24773], 99.95th=[24773], 00:43:22.259 | 99.99th=[24773] 00:43:22.259 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:43:22.259 slat (nsec): min=1622, max=6051.9k, avg=63027.18, 
stdev=400371.88 00:43:22.259 clat (usec): min=1150, max=21935, avg=8698.30, stdev=3606.07 00:43:22.259 lat (usec): min=1161, max=21943, avg=8761.33, stdev=3627.07 00:43:22.259 clat percentiles (usec): 00:43:22.259 | 1.00th=[ 2278], 5.00th=[ 4228], 10.00th=[ 5080], 20.00th=[ 6325], 00:43:22.259 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 8225], 00:43:22.259 | 70.00th=[ 9634], 80.00th=[11469], 90.00th=[14484], 95.00th=[16712], 00:43:22.259 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21890], 99.95th=[21890], 00:43:22.259 | 99.99th=[21890] 00:43:22.259 bw ( KiB/s): min=19216, max=33648, per=27.39%, avg=26432.00, stdev=10204.97, samples=2 00:43:22.259 iops : min= 4804, max= 8412, avg=6608.00, stdev=2551.24, samples=2 00:43:22.259 lat (msec) : 2=0.36%, 4=1.63%, 10=62.75%, 20=31.09%, 50=4.16% 00:43:22.259 cpu : usr=4.39%, sys=6.29%, ctx=520, majf=0, minf=1 00:43:22.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:43:22.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:22.259 issued rwts: total=6223,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:22.259 job2: (groupid=0, jobs=1): err= 0: pid=1917364: Mon Dec 9 05:36:36 2024 00:43:22.259 read: IOPS=5317, BW=20.8MiB/s (21.8MB/s)(21.6MiB/1042msec) 00:43:22.259 slat (nsec): min=946, max=13371k, avg=84087.36, stdev=550131.49 00:43:22.259 clat (usec): min=4013, max=62999, avg=12578.04, stdev=8054.90 00:43:22.259 lat (usec): min=4023, max=68770, avg=12662.13, stdev=8095.61 00:43:22.259 clat percentiles (usec): 00:43:22.259 | 1.00th=[ 5473], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8717], 00:43:22.259 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:43:22.259 | 70.00th=[10814], 80.00th=[15270], 90.00th=[20841], 95.00th=[25297], 00:43:22.259 | 99.00th=[54264], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:43:22.259 | 99.99th=[63177] 00:43:22.259 write: IOPS=5404, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1042msec); 0 zone resets 00:43:22.259 slat (nsec): min=1528, max=8749.2k, avg=79729.93, stdev=462523.62 00:43:22.259 clat (usec): min=3897, max=35585, avg=11075.85, stdev=5339.57 00:43:22.259 lat (usec): min=3952, max=35592, avg=11155.58, stdev=5379.83 00:43:22.259 clat percentiles (usec): 00:43:22.259 | 1.00th=[ 4948], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 8029], 00:43:22.259 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:43:22.259 | 70.00th=[ 9896], 80.00th=[15139], 90.00th=[18220], 95.00th=[23725], 00:43:22.259 | 99.00th=[28967], 99.50th=[30278], 99.90th=[31851], 99.95th=[33817], 00:43:22.259 | 99.99th=[35390] 00:43:22.259 bw ( KiB/s): min=20480, max=24576, per=23.35%, avg=22528.00, stdev=2896.31, samples=2 00:43:22.259 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:43:22.259 lat (msec) : 4=0.07%, 10=64.70%, 20=25.09%, 50=9.39%, 100=0.75% 00:43:22.259 cpu : usr=1.92%, sys=5.28%, ctx=576, majf=0, minf=1 00:43:22.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:43:22.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:22.259 issued rwts: total=5541,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:22.259 job3: (groupid=0, jobs=1): 
err= 0: pid=1917366: Mon Dec 9 05:36:36 2024 00:43:22.259 read: IOPS=4243, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1003msec) 00:43:22.259 slat (nsec): min=984, max=7941.8k, avg=107334.26, stdev=620755.20 00:43:22.259 clat (usec): min=1116, max=62041, avg=12819.40, stdev=3587.88 00:43:22.259 lat (usec): min=3278, max=62048, avg=12926.73, stdev=3640.28 00:43:22.260 clat percentiles (usec): 00:43:22.260 | 1.00th=[ 4015], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[ 9896], 00:43:22.260 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12911], 60.00th=[13698], 00:43:22.260 | 70.00th=[14222], 80.00th=[15664], 90.00th=[16909], 95.00th=[18482], 00:43:22.260 | 99.00th=[22676], 99.50th=[27919], 99.90th=[33817], 99.95th=[33817], 00:43:22.260 | 99.99th=[62129] 00:43:22.260 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:43:22.260 slat (nsec): min=1629, max=24460k, avg=113403.37, stdev=790787.12 00:43:22.260 clat (usec): min=4459, max=94071, avg=15403.42, stdev=13606.67 00:43:22.260 lat (usec): min=4481, max=94080, avg=15516.83, stdev=13690.20 00:43:22.260 clat percentiles (usec): 00:43:22.260 | 1.00th=[ 5342], 5.00th=[ 7373], 10.00th=[ 7963], 20.00th=[ 9241], 00:43:22.260 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11600], 60.00th=[12780], 00:43:22.260 | 70.00th=[13173], 80.00th=[15008], 90.00th=[26084], 95.00th=[40109], 00:43:22.260 | 99.00th=[79168], 99.50th=[84411], 99.90th=[93848], 99.95th=[93848], 00:43:22.260 | 99.99th=[93848] 00:43:22.260 bw ( KiB/s): min=15704, max=21160, per=19.10%, avg=18432.00, stdev=3857.97, samples=2 00:43:22.260 iops : min= 3926, max= 5290, avg=4608.00, stdev=964.49, samples=2 00:43:22.260 lat (msec) : 2=0.01%, 4=0.42%, 10=24.58%, 20=67.68%, 50=4.94% 00:43:22.260 lat (msec) : 100=2.37% 00:43:22.260 cpu : usr=3.29%, sys=3.99%, ctx=436, majf=0, minf=1 00:43:22.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:22.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:22.260 issued rwts: total=4256,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:22.260 00:43:22.260 Run status group 0 (all jobs): 00:43:22.260 READ: bw=90.8MiB/s (95.2MB/s), 16.6MiB/s-31.9MiB/s (17.4MB/s-33.5MB/s), io=94.6MiB (99.2MB), run=1003-1042msec 00:43:22.260 WRITE: bw=94.2MiB/s (98.8MB/s), 17.9MiB/s-32.1MiB/s (18.8MB/s-33.6MB/s), io=98.2MiB (103MB), run=1003-1042msec 00:43:22.260 00:43:22.260 Disk stats (read/write): 00:43:22.260 nvme0n1: ios=6706/7030, merge=0/0, ticks=40220/35639, in_queue=75859, util=87.88% 00:43:22.260 nvme0n2: ios=5677/5939, merge=0/0, ticks=40344/35452, in_queue=75796, util=95.32% 00:43:22.260 nvme0n3: ios=4615/4608, merge=0/0, ticks=23345/21220, in_queue=44565, util=97.17% 00:43:22.260 nvme0n4: ios=3605/3640, merge=0/0, ticks=19451/22889, in_queue=42340, util=93.84% 00:43:22.260 05:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:22.260 05:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1917702 00:43:22.260 05:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:22.260 05:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:22.260 [global] 00:43:22.260 thread=1 
00:43:22.260 invalidate=1 00:43:22.260 rw=read 00:43:22.260 time_based=1 00:43:22.260 runtime=10 00:43:22.260 ioengine=libaio 00:43:22.260 direct=1 00:43:22.260 bs=4096 00:43:22.260 iodepth=1 00:43:22.260 norandommap=1 00:43:22.260 numjobs=1 00:43:22.260 00:43:22.260 [job0] 00:43:22.260 filename=/dev/nvme0n1 00:43:22.260 [job1] 00:43:22.260 filename=/dev/nvme0n2 00:43:22.260 [job2] 00:43:22.260 filename=/dev/nvme0n3 00:43:22.260 [job3] 00:43:22.260 filename=/dev/nvme0n4 00:43:22.260 Could not set queue depth (nvme0n1) 00:43:22.260 Could not set queue depth (nvme0n2) 00:43:22.260 Could not set queue depth (nvme0n3) 00:43:22.260 Could not set queue depth (nvme0n4) 00:43:22.519 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:22.519 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:22.520 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:22.520 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:22.520 fio-3.35 00:43:22.520 Starting 4 threads 00:43:25.061 05:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:25.321 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9162752, buflen=4096 00:43:25.321 fio: pid=1917889, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:25.321 05:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:25.581 05:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:25.581 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7860224, buflen=4096 00:43:25.581 fio: pid=1917888, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:25.581 05:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:25.840 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5255168, buflen=4096 00:43:25.840 fio: pid=1917886, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:25.840 05:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:25.840 05:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:25.840 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1220608, buflen=4096 00:43:25.840 fio: pid=1917887, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:25.840 05:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:25.840 05:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:26.100 
00:43:26.100 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1917886: Mon Dec 9 05:36:39 2024 00:43:26.100 read: IOPS=434, BW=1738KiB/s (1780kB/s)(5132KiB/2953msec) 00:43:26.100 slat (usec): min=4, max=7012, avg=24.96, stdev=262.51 00:43:26.100 clat (usec): min=423, max=42075, avg=2256.27, stdev=7224.04 00:43:26.100 lat (usec): min=429, max=42100, avg=2281.25, stdev=7228.89 00:43:26.100 clat percentiles (usec): 00:43:26.100 | 1.00th=[ 594], 5.00th=[ 701], 10.00th=[ 766], 20.00th=[ 807], 00:43:26.100 | 30.00th=[ 824], 40.00th=[ 848], 50.00th=[ 881], 60.00th=[ 955], 00:43:26.100 | 70.00th=[ 1045], 80.00th=[ 1123], 90.00th=[ 1188], 95.00th=[ 1254], 00:43:26.100 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:26.100 | 99.99th=[42206] 00:43:26.100 bw ( KiB/s): min= 904, max= 1944, per=19.77%, avg=1422.40, stdev=430.87, samples=5 00:43:26.100 iops : min= 226, max= 486, avg=355.60, stdev=107.72, samples=5 00:43:26.100 lat (usec) : 500=0.23%, 750=8.10%, 1000=56.31% 00:43:26.100 lat (msec) : 2=32.01%, 50=3.27% 00:43:26.100 cpu : usr=0.30%, sys=0.81%, ctx=1287, majf=0, minf=1 00:43:26.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:26.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.100 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.100 issued rwts: total=1284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:26.100 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1917887: Mon Dec 9 05:36:39 2024 00:43:26.100 read: IOPS=93, BW=374KiB/s (383kB/s)(1192KiB/3190msec) 00:43:26.100 slat (usec): min=4, max=646, avg=14.83, stdev=37.85 00:43:26.100 clat (usec): min=382, max=69364, avg=10615.05, stdev=17500.45 00:43:26.100 lat (usec): min=392, max=70010, avg=10629.90, stdev=17514.45 00:43:26.100 clat percentiles (usec): 00:43:26.100 | 1.00th=[ 449], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 717], 00:43:26.100 | 30.00th=[ 750], 40.00th=[ 807], 50.00th=[ 840], 60.00th=[ 865], 00:43:26.100 | 70.00th=[ 979], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:26.100 | 99.00th=[41681], 99.50th=[43779], 99.90th=[69731], 99.95th=[69731], 00:43:26.100 | 99.99th=[69731] 00:43:26.100 bw ( KiB/s): min= 96, max= 1036, per=5.23%, avg=376.67, stdev=436.51, samples=6 00:43:26.100 iops : min= 24, max= 259, avg=94.17, stdev=109.13, samples=6 00:43:26.100 lat (usec) : 500=1.34%, 750=28.09%, 1000=41.81% 00:43:26.100 lat (msec) : 2=4.35%, 50=23.75%, 100=0.33% 00:43:26.100 cpu : usr=0.00%, sys=0.22%, ctx=302, majf=0, minf=2 00:43:26.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:26.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.100 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.100 issued rwts: total=299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:26.100 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1917888: Mon Dec 9 05:36:39 2024 00:43:26.100 read: IOPS=691, BW=2764KiB/s (2830kB/s)(7676KiB/2777msec) 00:43:26.100 slat (nsec): min=5852, max=78988, avg=19218.73, stdev=9879.37 00:43:26.100 clat (usec): min=467, max=42076, avg=1411.92, stdev=4350.90 00:43:26.100 lat (usec): min=474, 
max=42103, avg=1431.13, stdev=4352.14 00:43:26.100 clat percentiles (usec): 00:43:26.100 | 1.00th=[ 603], 5.00th=[ 734], 10.00th=[ 791], 20.00th=[ 832], 00:43:26.100 | 30.00th=[ 865], 40.00th=[ 898], 50.00th=[ 955], 60.00th=[ 996], 00:43:26.100 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1172], 00:43:26.100 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:26.100 | 99.99th=[42206] 00:43:26.100 bw ( KiB/s): min= 96, max= 4696, per=40.74%, avg=2931.20, stdev=1899.43, samples=5 00:43:26.100 iops : min= 24, max= 1174, avg=732.80, stdev=474.86, samples=5 00:43:26.100 lat (usec) : 500=0.16%, 750=5.99%, 1000=54.43% 00:43:26.100 lat (msec) : 2=38.23%, 50=1.15% 00:43:26.100 cpu : usr=0.83%, sys=1.98%, ctx=1920, majf=0, minf=2 00:43:26.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:26.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.100 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.100 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:26.100 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1917889: Mon Dec 9 05:36:39 2024 00:43:26.100 read: IOPS=867, BW=3467KiB/s (3550kB/s)(8948KiB/2581msec) 00:43:26.100 slat (nsec): min=24639, max=63656, avg=27762.84, stdev=3527.16 00:43:26.100 clat (usec): min=719, max=1403, avg=1109.55, stdev=75.19 00:43:26.100 lat (usec): min=746, max=1429, avg=1137.32, stdev=75.14 00:43:26.100 clat percentiles (usec): 00:43:26.100 | 1.00th=[ 889], 5.00th=[ 971], 10.00th=[ 1012], 20.00th=[ 1057], 00:43:26.100 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:43:26.100 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1221], 00:43:26.100 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1319], 99.95th=[ 1336], 00:43:26.100 | 99.99th=[ 1401] 00:43:26.100 bw ( KiB/s): min= 3472, max= 3528, per=48.68%, avg=3502.40, stdev=22.20, samples=5 00:43:26.100 iops : min= 868, max= 882, avg=875.60, stdev= 5.55, samples=5 00:43:26.100 lat (usec) : 750=0.04%, 1000=7.86% 00:43:26.100 lat (msec) : 2=92.05% 00:43:26.100 cpu : usr=1.71%, sys=3.45%, ctx=2239, majf=0, minf=2 00:43:26.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:26.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.100 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.100 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:26.100 00:43:26.100 Run status group 0 (all jobs): 00:43:26.100 READ: bw=7194KiB/s (7366kB/s), 374KiB/s-3467KiB/s (383kB/s-3550kB/s), io=22.4MiB (23.5MB), run=2581-3190msec 00:43:26.100 00:43:26.100 Disk stats (read/write): 00:43:26.101 nvme0n1: ios=1234/0, merge=0/0, ticks=2735/0, in_queue=2735, util=93.12% 00:43:26.101 nvme0n2: ios=295/0, merge=0/0, ticks=3039/0, in_queue=3039, util=94.87% 00:43:26.101 nvme0n3: ios=1849/0, merge=0/0, ticks=2396/0, in_queue=2396, util=95.69% 00:43:26.101 nvme0n4: ios=2230/0, merge=0/0, ticks=2224/0, in_queue=2224, util=96.37% 00:43:26.101 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:26.101 05:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:26.360 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:26.360 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:26.619 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:26.619 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:26.879 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:26.879 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:26.879 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:26.879 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1917702 00:43:26.879 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:26.879 05:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:27.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:27.447 nvmf hotplug test: fio failed as expected 00:43:27.447 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:27.708 
05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:27.708 rmmod nvme_tcp 00:43:27.708 rmmod nvme_fabrics 00:43:27.708 rmmod nvme_keyring 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1914495 ']' 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1914495 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1914495 ']' 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1914495 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:27.708 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1914495 00:43:27.968 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:27.968 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:27.968 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1914495' 00:43:27.968 killing process with pid 1914495 00:43:27.968 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1914495 00:43:27.968 05:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1914495 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:28.539 05:36:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:28.539 05:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:30.452 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:30.452 00:43:30.452 real 0m29.197s 00:43:30.452 user 2m17.357s 00:43:30.452 sys 0m12.553s 00:43:30.452 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:30.452 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:30.452 ************************************ 00:43:30.452 END TEST nvmf_fio_target 00:43:30.452 ************************************ 00:43:30.452 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:30.713 ************************************ 00:43:30.713 START TEST nvmf_bdevio 00:43:30.713 ************************************ 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:30.713 * Looking for test storage... 
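
The err=95 (Operation not supported) results above are the intended outcome of the hotplug test: fio.sh starts fio against the connected namespaces, then hot-removes the backing Malloc bdevs over RPC, so in-flight I/O fails once each bdev disappears, and a non-zero fio exit is treated as success. A minimal sketch of that flow, assuming a hypothetical job file hotplug.fio that targets the four nvme0nX devices:

# Sketch of the fio.sh@65-80 sequence traced above. hotplug.fio is a
# hypothetical job file; the Malloc names and rpc.py path are from this run.
fio_status=0
fio hotplug.fio &
fio_pid=$!
sleep 3                                          # let I/O ramp up (assumed delay)
for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$bdev"    # hot-remove while fio is running
done
wait "$fio_pid" || fio_status=$?                 # the log shows fio_status=4 here
if [ "$fio_status" -ne 0 ]; then
    echo 'nvmf hotplug test: fio failed as expected'
else
    echo 'fio unexpectedly survived the hotplug' >&2
    exit 1
fi
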
00:43:30.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:30.713 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:30.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:30.714 --rc genhtml_branch_coverage=1 00:43:30.714 --rc genhtml_function_coverage=1 00:43:30.714 --rc genhtml_legend=1 00:43:30.714 --rc geninfo_all_blocks=1 00:43:30.714 --rc geninfo_unexecuted_blocks=1 00:43:30.714 00:43:30.714 ' 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:30.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:30.714 --rc genhtml_branch_coverage=1 00:43:30.714 --rc genhtml_function_coverage=1 00:43:30.714 --rc genhtml_legend=1 00:43:30.714 --rc geninfo_all_blocks=1 00:43:30.714 --rc geninfo_unexecuted_blocks=1 00:43:30.714 00:43:30.714 ' 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:30.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:30.714 --rc genhtml_branch_coverage=1 00:43:30.714 --rc genhtml_function_coverage=1 00:43:30.714 --rc genhtml_legend=1 00:43:30.714 --rc geninfo_all_blocks=1 00:43:30.714 --rc geninfo_unexecuted_blocks=1 00:43:30.714 00:43:30.714 ' 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:30.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:30.714 --rc genhtml_branch_coverage=1 00:43:30.714 --rc genhtml_function_coverage=1 00:43:30.714 --rc genhtml_legend=1 00:43:30.714 --rc geninfo_all_blocks=1 00:43:30.714 --rc geninfo_unexecuted_blocks=1 00:43:30.714 00:43:30.714 ' 00:43:30.714 05:36:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:30.714 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:30.975 05:36:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:30.975 05:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:39.126 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:39.127 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:39.127 05:36:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:39.127 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:39.127 Found net devices under 0000:31:00.0: cvl_0_0 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:39.127 Found net devices under 0000:31:00.1: cvl_0_1 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:39.127 05:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:39.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:39.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:43:39.127 00:43:39.127 --- 10.0.0.2 ping statistics --- 00:43:39.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:39.127 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:39.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:39.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:43:39.127 00:43:39.127 --- 10.0.0.1 ping statistics --- 00:43:39.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:39.127 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:39.127 05:36:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1923225 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1923225 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:39.127 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1923225 ']' 00:43:39.128 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:39.128 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:39.128 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:39.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:39.128 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:39.128 05:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.128 [2024-12-09 05:36:52.414228] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:39.128 [2024-12-09 05:36:52.416549] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:43:39.128 [2024-12-09 05:36:52.416635] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:39.128 [2024-12-09 05:36:52.565372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:39.128 [2024-12-09 05:36:52.664438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:39.128 [2024-12-09 05:36:52.664480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:39.128 [2024-12-09 05:36:52.664493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:39.128 [2024-12-09 05:36:52.664503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:39.128 [2024-12-09 05:36:52.664514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:39.128 [2024-12-09 05:36:52.666775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:43:39.128 [2024-12-09 05:36:52.666897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:43:39.128 [2024-12-09 05:36:52.667241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:39.128 [2024-12-09 05:36:52.667265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:43:39.128 [2024-12-09 05:36:52.921664] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
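
The interrupt-mode notices continue for the remaining poll-group threads just below. They are the direct result of the nvmfappstart step traced above: nvmf_tgt is launched inside the target namespace with core mask 0x78 (cores 3-6) and --interrupt-mode, after which waitforlisten polls the RPC socket until the app is ready. A rough equivalent, assuming the namespace and binary paths from this run (the rpc.py -t timeout flag and the framework_get_reactors RPC are standard features, used here only for illustration):

# Sketch of nvmfappstart -m 0x78 with --interrupt-mode (names from this run).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
# waitforlisten: block until the app answers on /var/tmp/spdk.sock.
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# In interrupt mode reactors sleep in epoll instead of busy-polling;
# the per-core state can be inspected at runtime:
./scripts/rpc.py framework_get_reactors
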
00:43:39.128 [2024-12-09 05:36:52.922675] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:39.128 [2024-12-09 05:36:52.923340] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:39.128 [2024-12-09 05:36:52.923481] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:39.128 [2024-12-09 05:36:52.923607] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:39.387 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:39.387 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:43:39.387 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:39.387 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.388 [2024-12-09 05:36:53.232426] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.388 Malloc0 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.388 05:36:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:39.388 [2024-12-09 05:36:53.364252] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:39.388 { 00:43:39.388 "params": { 00:43:39.388 "name": "Nvme$subsystem", 00:43:39.388 "trtype": "$TEST_TRANSPORT", 00:43:39.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:39.388 "adrfam": "ipv4", 00:43:39.388 "trsvcid": "$NVMF_PORT", 00:43:39.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:39.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:39.388 "hdgst": ${hdgst:-false}, 00:43:39.388 "ddgst": ${ddgst:-false} 00:43:39.388 }, 00:43:39.388 "method": "bdev_nvme_attach_controller" 00:43:39.388 } 00:43:39.388 EOF 00:43:39.388 )") 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:43:39.388 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:43:39.648 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:43:39.648 05:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:39.648 "params": { 00:43:39.648 "name": "Nvme1", 00:43:39.648 "trtype": "tcp", 00:43:39.648 "traddr": "10.0.0.2", 00:43:39.648 "adrfam": "ipv4", 00:43:39.648 "trsvcid": "4420", 00:43:39.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:39.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:39.648 "hdgst": false, 00:43:39.648 "ddgst": false 00:43:39.648 }, 00:43:39.648 "method": "bdev_nvme_attach_controller" 00:43:39.648 }' 00:43:39.648 [2024-12-09 05:36:53.445980] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
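
The JSON printed above comes from gen_nvmf_target_json and reaches bdevio through /dev/fd/62, i.e. a bash process substitution. A hand-rolled equivalent, assuming the standard SPDK "subsystems"/"bdev" wrapper around the bdev_nvme_attach_controller entry (the wrapper itself is not visible in this log excerpt):

# Sketch: write the attach-controller config to a file and point bdevio at it.
cat > /tmp/bdevio.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
JSON
./test/bdev/bdevio/bdevio --json /tmp/bdevio.json
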
00:43:39.648 [2024-12-09 05:36:53.446086] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923301 ] 00:43:39.648 [2024-12-09 05:36:53.589381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:39.907 [2024-12-09 05:36:53.691615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:39.907 [2024-12-09 05:36:53.691713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:39.907 [2024-12-09 05:36:53.691736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:40.167 I/O targets: 00:43:40.168 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:40.168 00:43:40.168 00:43:40.168 CUnit - A unit testing framework for C - Version 2.1-3 00:43:40.168 http://cunit.sourceforge.net/ 00:43:40.168 00:43:40.168 00:43:40.168 Suite: bdevio tests on: Nvme1n1 00:43:40.168 Test: blockdev write read block ...passed 00:43:40.168 Test: blockdev write zeroes read block ...passed 00:43:40.168 Test: blockdev write zeroes read no split ...passed 00:43:40.428 Test: blockdev write zeroes read split ...passed 00:43:40.428 Test: blockdev write zeroes read split partial ...passed 00:43:40.428 Test: blockdev reset ...[2024-12-09 05:36:54.255360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:40.428 [2024-12-09 05:36:54.255527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000394200 (9): Bad file descriptor 00:43:40.428 [2024-12-09 05:36:54.265289] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:43:40.428 passed 00:43:40.428 Test: blockdev write read 8 blocks ...passed 00:43:40.428 Test: blockdev write read size > 128k ...passed 00:43:40.428 Test: blockdev write read invalid size ...passed 00:43:40.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:40.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:40.428 Test: blockdev write read max offset ...passed 00:43:40.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:40.688 Test: blockdev writev readv 8 blocks ...passed 00:43:40.688 Test: blockdev writev readv 30 x 1block ...passed 00:43:40.688 Test: blockdev writev readv block ...passed 00:43:40.688 Test: blockdev writev readv size > 128k ...passed 00:43:40.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:40.688 Test: blockdev comparev and writev ...[2024-12-09 05:36:54.572256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:40.688 [2024-12-09 05:36:54.572311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.572336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:40.688 [2024-12-09 05:36:54.572349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.572922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:40.688 [2024-12-09 05:36:54.572944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.572963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:40.688 [2024-12-09 05:36:54.572976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.573527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:40.688 [2024-12-09 05:36:54.573546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.573566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:40.688 [2024-12-09 05:36:54.573578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.574155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:40.688 [2024-12-09 05:36:54.574181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.574202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:40.688 [2024-12-09 05:36:54.574214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:40.688 passed 00:43:40.688 Test: blockdev nvme passthru rw ...passed 00:43:40.688 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:36:54.658583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:40.688 [2024-12-09 05:36:54.658625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.658922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:40.688 [2024-12-09 05:36:54.658939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.659211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:40.688 [2024-12-09 05:36:54.659229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:40.688 [2024-12-09 05:36:54.659482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:40.688 [2024-12-09 05:36:54.659499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:40.688 passed 00:43:40.688 Test: blockdev nvme admin passthru ...passed 00:43:40.947 Test: blockdev copy ...passed 00:43:40.947 00:43:40.947 Run Summary: Type Total Ran Passed Failed Inactive 00:43:40.947 suites 1 1 n/a 0 0 00:43:40.947 tests 23 23 23 0 0 00:43:40.947 asserts 152 152 152 0 n/a 00:43:40.947 00:43:40.947 Elapsed time = 1.471 seconds 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:41.516 rmmod nvme_tcp 00:43:41.516 rmmod nvme_fabrics 00:43:41.516 rmmod nvme_keyring 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
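
The module unloads above are followed below by killprocess 1923225, the same helper traced earlier for pid 1914495. Its pattern: confirm the pid is still alive, check the process name (reactor_0/reactor_3 in this log) so sudo is never killed by mistake, announce the kill, then reap the child. Approximately, as a sketch of the traced autotest_common.sh logic:

# Sketch of the killprocess helper as traced in this log.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0        # already exited
    local name
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_3
    [ "$name" != sudo ] || return 1               # refuse to kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                           # reap; ignore exit status
}
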
00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1923225 ']' 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1923225 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1923225 ']' 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1923225 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:41.516 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1923225 00:43:41.774 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:43:41.774 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:43:41.774 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1923225' 00:43:41.774 killing process with pid 1923225 00:43:41.774 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1923225 00:43:41.774 05:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1923225 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:42.713 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:42.714 05:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:44.632 05:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:44.632 00:43:44.632 real 0m14.052s 00:43:44.632 user 
0m15.972s 00:43:44.632 sys 0m6.914s 00:43:44.632 05:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:44.632 05:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:44.632 ************************************ 00:43:44.632 END TEST nvmf_bdevio 00:43:44.632 ************************************ 00:43:44.633 05:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:44.633 00:43:44.633 real 5m11.794s 00:43:44.633 user 10m47.659s 00:43:44.633 sys 2m6.119s 00:43:44.633 05:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:44.633 05:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:44.633 ************************************ 00:43:44.633 END TEST nvmf_target_core_interrupt_mode 00:43:44.633 ************************************ 00:43:44.633 05:36:58 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:44.633 05:36:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:44.633 05:36:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:44.633 05:36:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:44.894 ************************************ 00:43:44.894 START TEST nvmf_interrupt 00:43:44.894 ************************************ 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:44.894 * Looking for test storage... 
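Each suite in this log is launched through the harness's run_test wrapper, which prints the START TEST/END TEST banners and produces the real/user/sys totals seen above by timing the suite script. A simplified sketch of that wrapper follows; it is hedged, since the actual implementation in autotest_common.sh also manages xtrace state and per-test accounting that a few lines cannot capture.

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # run the suite script with its arguments
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # Invocation matching the trace above:
    run_test nvmf_interrupt \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh \
        --transport=tcp --interrupt-mode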
00:43:44.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:44.894 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:44.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.894 --rc genhtml_branch_coverage=1 00:43:44.894 --rc genhtml_function_coverage=1 00:43:44.894 --rc genhtml_legend=1 00:43:44.894 --rc geninfo_all_blocks=1 00:43:44.895 --rc geninfo_unexecuted_blocks=1 00:43:44.895 00:43:44.895 ' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:44.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.895 --rc genhtml_branch_coverage=1 00:43:44.895 --rc genhtml_function_coverage=1 00:43:44.895 --rc genhtml_legend=1 00:43:44.895 --rc geninfo_all_blocks=1 00:43:44.895 --rc geninfo_unexecuted_blocks=1 00:43:44.895 00:43:44.895 ' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:44.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.895 --rc genhtml_branch_coverage=1 00:43:44.895 --rc genhtml_function_coverage=1 00:43:44.895 --rc genhtml_legend=1 00:43:44.895 --rc geninfo_all_blocks=1 00:43:44.895 --rc geninfo_unexecuted_blocks=1 00:43:44.895 00:43:44.895 ' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:44.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.895 --rc genhtml_branch_coverage=1 00:43:44.895 --rc genhtml_function_coverage=1 00:43:44.895 --rc genhtml_legend=1 00:43:44.895 --rc geninfo_all_blocks=1 00:43:44.895 --rc geninfo_unexecuted_blocks=1 00:43:44.895 00:43:44.895 ' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:44.895 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:45.156 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:45.156 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:45.156 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:45.156 05:36:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:45.156 05:36:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:53.296 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:53.296 05:37:05 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:53.296 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:53.296 Found net devices under 0000:31:00.0: cvl_0_0 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:53.296 Found net devices under 0000:31:00.1: cvl_0_1 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:53.296 05:37:05 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:53.296 05:37:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:53.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:53.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:43:53.296 00:43:53.296 --- 10.0.0.2 ping statistics --- 00:43:53.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:53.296 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:53.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:53.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:43:53.296 00:43:53.296 --- 10.0.0.1 ping statistics --- 00:43:53.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:53.296 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:53.296 05:37:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1928021 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1928021 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1928021 ']' 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:53.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:53.297 05:37:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:53.297 [2024-12-09 05:37:06.280534] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:53.297 [2024-12-09 05:37:06.282824] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:43:53.297 [2024-12-09 05:37:06.282908] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:53.297 [2024-12-09 05:37:06.432125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:53.297 [2024-12-09 05:37:06.530694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:43:53.297 [2024-12-09 05:37:06.530735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:53.297 [2024-12-09 05:37:06.530751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:53.297 [2024-12-09 05:37:06.530760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:53.297 [2024-12-09 05:37:06.530772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:53.297 [2024-12-09 05:37:06.532727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:53.297 [2024-12-09 05:37:06.532750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:53.297 [2024-12-09 05:37:06.776322] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:53.297 [2024-12-09 05:37:06.776452] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:53.297 [2024-12-09 05:37:06.776607] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:43:53.297 5000+0 records in 00:43:53.297 5000+0 records out 00:43:53.297 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0181249 s, 565 MB/s 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:53.297 AIO0 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:53.297 [2024-12-09 05:37:07.173929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.297 05:37:07 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:53.297 [2024-12-09 05:37:07.222615] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1928021 0 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1928021 0 idle 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1928021 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1928021 -w 256 00:43:53.297 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1928021 root 20 0 20.1t 208512 99072 S 0.0 0.2 0:00.61 reactor_0' 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1928021 root 20 0 20.1t 208512 99072 S 0.0 0.2 0:00.61 reactor_0 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1928021 1 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1928021 1 idle 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1928021 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1928021 -w 256 00:43:53.558 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1928032 root 20 0 20.1t 208512 99072 S 0.0 0.2 0:00.00 reactor_1' 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1928032 root 20 0 20.1t 208512 99072 S 0.0 0.2 0:00.00 reactor_1 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1928384 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1928021 0 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1928021 0 busy 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1928021 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1928021 -w 256 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1928021 root 20 0 20.1t 208512 99072 S 0.0 0.2 0:00.62 reactor_0' 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1928021 root 20 0 20.1t 208512 99072 S 0.0 0.2 0:00.62 reactor_0 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:53.818 05:37:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1928021 -w 256 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1928021 root 20 0 20.1t 221184 99072 R 99.9 0.2 0:02.85 reactor_0' 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1928021 root 20 0 20.1t 221184 99072 R 99.9 0.2 0:02.85 reactor_0 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1928021 1 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1928021 1 busy 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1928021 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1928021 -w 256 00:43:55.202 05:37:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1928032 root 20 0 20.1t 221184 99072 R 99.9 0.2 0:01.31 reactor_1' 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1928032 root 20 0 20.1t 221184 99072 R 99.9 0.2 0:01.31 reactor_1 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:55.202 05:37:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1928384 00:44:05.197 Initializing NVMe Controllers 00:44:05.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:05.197 Controller IO queue size 256, less than required. 00:44:05.197 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:44:05.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:44:05.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:44:05.197 Initialization complete. Launching workers. 
00:44:05.197 ======================================================== 00:44:05.197 Latency(us) 00:44:05.197 Device Information : IOPS MiB/s Average min max 00:44:05.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19538.49 76.32 13106.29 5465.15 35308.66 00:44:05.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18714.69 73.10 13680.43 7426.77 30959.90 00:44:05.197 ======================================================== 00:44:05.197 Total : 38253.17 149.43 13387.18 5465.15 35308.66 00:44:05.197 00:44:05.197 [2024-12-09 05:37:17.902738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(6) to be set 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1928021 0 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1928021 0 idle 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1928021 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:05.197 05:37:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1928021 -w 256 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1928021 root 20 0 20.1t 221184 99072 S 0.0 0.2 0:20.60 reactor_0' 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1928021 root 20 0 20.1t 221184 99072 S 0.0 0.2 0:20.60 reactor_0 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1928021 1 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1928021 1 
idle 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1928021 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1928021 -w 256 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1928032 root 20 0 20.1t 221184 99072 S 0.0 0.2 0:10.00 reactor_1' 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1928032 root 20 0 20.1t 221184 99072 S 0.0 0.2 0:10.00 reactor_1 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:05.197 05:37:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:05.458 05:37:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:44:05.458 05:37:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:44:05.458 05:37:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:44:05.458 05:37:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:44:05.458 05:37:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:44:07.368 05:37:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:44:07.368 05:37:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1928021 0 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1928021 0 idle 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1928021 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1928021 -w 256 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1928021 root 20 0 20.1t 293760 125568 S 0.0 0.2 0:21.44 reactor_0' 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1928021 root 20 0 20.1t 293760 125568 S 0.0 0.2 0:21.44 reactor_0 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1928021 1 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1928021 1 idle 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1928021 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:07.629 05:37:21 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1928021 -w 256 00:44:07.629 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1928032 root 20 0 20.1t 293760 125568 S 0.0 0.2 0:10.37 reactor_1' 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1928032 root 20 0 20.1t 293760 125568 S 0.0 0.2 0:10.37 reactor_1 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:07.890 05:37:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:08.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:08.152 05:37:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:08.152 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:08.414 rmmod nvme_tcp 00:44:08.414 rmmod nvme_fabrics 00:44:08.414 rmmod nvme_keyring 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 
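The nvmf_interrupt checks above repeatedly run the reactor_is_busy_or_idle helper from interrupt/common.sh: one batch frame of top restricted to the target PID, a grep for the reactor_<idx> thread, the %CPU column ($9), and a comparison against the idle threshold of 30 (busy uses 65). A minimal standalone sketch of that check, with the thread naming and thresholds taken from the trace (not the verbatim helper):

    # Succeeds when reactor_<idx> of <pid> is at or below the idle %CPU threshold.
    reactor_idle() {
        local pid=$1 idx=$2 idle_threshold=30
        local row cpu_rate
        # One batch iteration of top in threads mode, limited to the target process.
        row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
        # Field 9 of a top thread row is %CPU; drop the fractional part as the trace does.
        cpu_rate=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
        (( ${cpu_rate%%.*} <= idle_threshold ))
    }

The trace wraps this in a countdown loop ((( j = 10 )) ... (( j != 0 ))) because a freshly connected or disconnected initiator can briefly spike a reactor before it settles back into interrupt-driven idling.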
00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1928021 ']' 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1928021 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1928021 ']' 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1928021 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1928021 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1928021' 00:44:08.414 killing process with pid 1928021 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1928021 00:44:08.414 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1928021 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:08.985 05:37:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:11.530 05:37:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:11.530 00:44:11.530 real 0m26.352s 00:44:11.530 user 0m41.938s 00:44:11.530 sys 0m9.797s 00:44:11.530 05:37:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:11.530 05:37:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:11.530 ************************************ 00:44:11.530 END TEST nvmf_interrupt 00:44:11.530 ************************************ 00:44:11.530 00:44:11.530 real 38m31.904s 00:44:11.530 user 91m43.085s 00:44:11.530 sys 11m18.062s 00:44:11.530 05:37:25 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:11.530 05:37:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.530 ************************************ 00:44:11.530 END TEST nvmf_tcp 00:44:11.530 ************************************ 00:44:11.530 05:37:25 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:44:11.530 05:37:25 -- spdk/autotest.sh@286 -- 
# run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:11.530 05:37:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:11.530 05:37:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:11.530 05:37:25 -- common/autotest_common.sh@10 -- # set +x 00:44:11.530 ************************************ 00:44:11.530 START TEST spdkcli_nvmf_tcp 00:44:11.530 ************************************ 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:11.530 * Looking for test storage... 00:44:11.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:11.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.530 --rc genhtml_branch_coverage=1 00:44:11.530 --rc genhtml_function_coverage=1 00:44:11.530 --rc genhtml_legend=1 00:44:11.530 --rc geninfo_all_blocks=1 00:44:11.530 --rc geninfo_unexecuted_blocks=1 00:44:11.530 00:44:11.530 ' 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:11.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.530 --rc genhtml_branch_coverage=1 00:44:11.530 --rc genhtml_function_coverage=1 00:44:11.530 --rc genhtml_legend=1 00:44:11.530 --rc geninfo_all_blocks=1 00:44:11.530 --rc geninfo_unexecuted_blocks=1 00:44:11.530 00:44:11.530 ' 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:11.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.530 --rc genhtml_branch_coverage=1 00:44:11.530 --rc genhtml_function_coverage=1 00:44:11.530 --rc genhtml_legend=1 00:44:11.530 --rc geninfo_all_blocks=1 00:44:11.530 --rc geninfo_unexecuted_blocks=1 00:44:11.530 00:44:11.530 ' 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:11.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.530 --rc genhtml_branch_coverage=1 00:44:11.530 --rc genhtml_function_coverage=1 00:44:11.530 --rc genhtml_legend=1 00:44:11.530 --rc geninfo_all_blocks=1 00:44:11.530 --rc geninfo_unexecuted_blocks=1 00:44:11.530 00:44:11.530 ' 00:44:11.530 05:37:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:44:11.531 
05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:44:11.531 05:37:25 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:11.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1931883 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1931883 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1931883 ']' 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:11.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:11.531 05:37:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.531 [2024-12-09 05:37:25.446566] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
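run_nvmf_tgt above launches build/bin/nvmf_tgt with -m 0x3 -p 0 and then blocks in waitforlisten until the app is actually serving RPCs on /var/tmp/spdk.sock. A simplified sketch of that wait, assuming the usual repo layout ($rootdir) and using the standard spdk_get_version RPC as the liveness probe (the real helper in autotest_common.sh is more elaborate):

    "$rootdir/build/bin/nvmf_tgt" -m 0x3 -p 0 &
    nvmf_tgt_pid=$!

    rpc_addr=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do            # max_retries=100, as in the trace
        # The socket node can appear before the app is ready, so also require an RPC answer.
        if [[ -S $rpc_addr ]] && "$rootdir/scripts/rpc.py" -s "$rpc_addr" spdk_get_version &> /dev/null; then
            break
        fi
        sleep 0.1
    done

Only once this returns does the test drive spdkcli_job.py against the target, which is why the waitforlisten banner appears before any of the Executing command lines.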
00:44:11.531 [2024-12-09 05:37:25.446673] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931883 ] 00:44:11.791 [2024-12-09 05:37:25.588190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:11.791 [2024-12-09 05:37:25.687267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:11.791 [2024-12-09 05:37:25.687288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:12.362 05:37:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:12.362 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:12.362 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:12.362 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:12.362 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:12.362 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:12.362 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:12.362 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:12.362 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:12.362 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:12.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:12.362 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:12.362 ' 00:44:15.663 [2024-12-09 05:37:29.111852] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:16.604 [2024-12-09 05:37:30.476095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:19.147 [2024-12-09 05:37:32.991178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:21.836 [2024-12-09 05:37:35.217511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:23.219 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:23.219 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:23.219 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:23.219 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:23.219 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:23.219 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:23.219 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:23.219 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:23.219 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:23.219 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:23.219 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:23.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:23.219 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:23.219 05:37:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:23.219 05:37:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:23.219 05:37:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:23.219 05:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:23.219 05:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:23.219 05:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:23.219 05:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:23.219 05:37:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:23.480 05:37:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:23.740 05:37:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:23.740 05:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:23.740 05:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:23.740 05:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:23.740 
05:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:23.740 05:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:23.740 05:37:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:23.740 05:37:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:23.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:23.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:23.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:23.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:23.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:23.740 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:23.740 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:23.740 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:23.740 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:23.740 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:23.740 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:23.740 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:23.740 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:23.740 ' 00:44:30.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:30.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:30.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:30.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:30.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:30.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:30.322 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:30.322 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:30.322 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:30.322 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:30.322 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:30.322 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:30.322 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:30.322 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:30.322 
05:37:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1931883 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1931883 ']' 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1931883 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1931883 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1931883' 00:44:30.322 killing process with pid 1931883 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1931883 00:44:30.322 05:37:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1931883 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1931883 ']' 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1931883 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1931883 ']' 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1931883 00:44:30.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1931883) - No such process 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1931883 is not found' 00:44:30.322 Process with pid 1931883 is not found 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:30.322 00:44:30.322 real 0m18.963s 00:44:30.322 user 0m41.557s 00:44:30.322 sys 0m1.032s 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:30.322 05:37:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:30.322 ************************************ 00:44:30.322 END TEST spdkcli_nvmf_tcp 00:44:30.322 ************************************ 00:44:30.322 05:37:44 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:30.322 05:37:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:30.322 05:37:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:30.322 05:37:44 -- common/autotest_common.sh@10 -- # set +x 00:44:30.322 ************************************ 00:44:30.322 START TEST nvmf_identify_passthru 00:44:30.322 ************************************ 00:44:30.322 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:30.322 * Looking for test 
storage... 00:44:30.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:30.322 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:30.322 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:44:30.322 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:30.584 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:30.584 05:37:44 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:30.584 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:30.584 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:30.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.584 --rc genhtml_branch_coverage=1 00:44:30.584 --rc genhtml_function_coverage=1 00:44:30.584 --rc genhtml_legend=1 00:44:30.584 --rc geninfo_all_blocks=1 00:44:30.584 --rc geninfo_unexecuted_blocks=1 00:44:30.584 00:44:30.584 ' 00:44:30.584 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:30.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.584 --rc genhtml_branch_coverage=1 00:44:30.584 --rc genhtml_function_coverage=1 00:44:30.584 --rc genhtml_legend=1 00:44:30.584 --rc geninfo_all_blocks=1 00:44:30.584 --rc geninfo_unexecuted_blocks=1 00:44:30.585 00:44:30.585 ' 00:44:30.585 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:30.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.585 --rc genhtml_branch_coverage=1 00:44:30.585 --rc genhtml_function_coverage=1 00:44:30.585 --rc genhtml_legend=1 00:44:30.585 --rc geninfo_all_blocks=1 00:44:30.585 --rc geninfo_unexecuted_blocks=1 00:44:30.585 00:44:30.585 ' 00:44:30.585 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:30.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.585 --rc genhtml_branch_coverage=1 00:44:30.585 --rc genhtml_function_coverage=1 00:44:30.585 --rc genhtml_legend=1 00:44:30.585 --rc geninfo_all_blocks=1 00:44:30.585 --rc geninfo_unexecuted_blocks=1 00:44:30.585 00:44:30.585 ' 00:44:30.585 05:37:44 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:30.585 05:37:44 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:30.585 05:37:44 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:30.585 05:37:44 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:30.585 05:37:44 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:30.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:30.585 05:37:44 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:30.585 05:37:44 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:30.585 05:37:44 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:30.585 05:37:44 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:30.585 05:37:44 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:30.585 05:37:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.585 05:37:44 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:30.585 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:30.585 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:30.585 05:37:44 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:30.585 05:37:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:38.780 05:37:51 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:38.780 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:38.780 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:38.780 Found net devices under 0000:31:00.0: cvl_0_0 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:38.780 Found net devices under 0000:31:00.1: cvl_0_1 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:38.780 05:37:51 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:38.780 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:38.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:38.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:44:38.781 00:44:38.781 --- 10.0.0.2 ping statistics --- 00:44:38.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:38.781 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:38.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
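The nvmf_tcp_init steps traced here reduce to: resolve each supported PCI function to its net device through sysfs, put the target-side port into a private network namespace, address both ends, and open the NVMe/TCP port. A condensed sketch of the same sequence, using the interface names and addresses from this run (the PCI address is the first e810 port found above):

    # map a PCI function to its net device via sysfs, as nvmf/common.sh does
    pci=0000:31:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name, e.g. cvl_0_0

    # target port into its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Splitting the two cabled ports of one NIC across namespaces gives a real TCP path on a single host, which is why the two pings traverse the wire rather than loopback.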
00:44:38.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:44:38.781 00:44:38.781 --- 10.0.0.1 ping statistics --- 00:44:38.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:38.781 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:38.781 05:37:51 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:38.781 05:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:38.781 05:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:44:38.781 05:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:44:38.781 05:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:44:38.781 05:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:44:38.781 05:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:38.781 05:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:38.781 05:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:38.781 05:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 
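The get_first_nvme_bdf and identify steps traced here amount to the pipeline below. Paths are relative to the SPDK checkout, and head -1 stands in for the script's first-element selection (an assumption; the trace prints every traddr and takes the first):

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -1)    # 0000:65:00.0 in this run
    nvme_serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')                           # S64GNE0R605500
    nvme_model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')                            # SAMSUNG (awk keeps the first word)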
-- # nvme_serial_number=S64GNE0R605500 00:44:38.781 05:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:38.781 05:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:38.781 05:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:39.354 05:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:44:39.354 05:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:39.354 05:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:39.354 05:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1939364 00:44:39.354 05:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:39.354 05:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:39.354 05:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1939364 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1939364 ']' 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:39.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:39.354 05:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:39.354 [2024-12-09 05:37:53.286930] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:44:39.354 [2024-12-09 05:37:53.287075] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:39.615 [2024-12-09 05:37:53.451482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:39.615 [2024-12-09 05:37:53.577053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:39.615 [2024-12-09 05:37:53.577120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:44:39.615 [2024-12-09 05:37:53.577133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:39.615 [2024-12-09 05:37:53.577146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:39.615 [2024-12-09 05:37:53.577155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:39.615 [2024-12-09 05:37:53.580028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:39.615 [2024-12-09 05:37:53.580160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:39.615 [2024-12-09 05:37:53.580266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:39.615 [2024-12-09 05:37:53.580292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:40.186 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:40.186 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:44:40.186 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:40.186 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.186 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:40.186 INFO: Log level set to 20 00:44:40.186 INFO: Requests: 00:44:40.186 { 00:44:40.186 "jsonrpc": "2.0", 00:44:40.186 "method": "nvmf_set_config", 00:44:40.186 "id": 1, 00:44:40.186 "params": { 00:44:40.186 "admin_cmd_passthru": { 00:44:40.186 "identify_ctrlr": true 00:44:40.186 } 00:44:40.186 } 00:44:40.186 } 00:44:40.186 00:44:40.186 INFO: response: 00:44:40.186 { 00:44:40.186 "jsonrpc": "2.0", 00:44:40.186 "id": 1, 00:44:40.186 "result": true 00:44:40.186 } 00:44:40.186 00:44:40.186 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.186 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:40.186 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.186 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:40.186 INFO: Setting log level to 20 00:44:40.186 INFO: Setting log level to 20 00:44:40.186 INFO: Log level set to 20 00:44:40.186 INFO: Log level set to 20 00:44:40.186 INFO: Requests: 00:44:40.186 { 00:44:40.186 "jsonrpc": "2.0", 00:44:40.186 "method": "framework_start_init", 00:44:40.186 "id": 1 00:44:40.186 } 00:44:40.186 00:44:40.186 INFO: Requests: 00:44:40.186 { 00:44:40.186 "jsonrpc": "2.0", 00:44:40.186 "method": "framework_start_init", 00:44:40.186 "id": 1 00:44:40.186 } 00:44:40.186 00:44:40.448 [2024-12-09 05:37:54.320095] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:40.448 INFO: response: 00:44:40.448 { 00:44:40.448 "jsonrpc": "2.0", 00:44:40.448 "id": 1, 00:44:40.448 "result": true 00:44:40.448 } 00:44:40.448 00:44:40.448 INFO: response: 00:44:40.448 { 00:44:40.448 "jsonrpc": "2.0", 00:44:40.448 "id": 1, 00:44:40.448 "result": true 00:44:40.448 } 00:44:40.448 00:44:40.448 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.448 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:40.448 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.448 05:37:54 nvmf_identify_passthru -- 
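Because nvmf_tgt was launched with --wait-for-rpc, the passthru flag must be set before subsystem initialization; the JSON request/response pairs above are exactly what goes over /var/tmp/spdk.sock. The rpc_cmd wrapper resolves to scripts/rpc.py, so the same two calls can be made directly:

    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # admin_cmd_passthru.identify_ctrlr = true
    scripts/rpc.py framework_start_init                        # leave the pre-init RPC state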
common/autotest_common.sh@10 -- # set +x 00:44:40.448 INFO: Setting log level to 40 00:44:40.448 INFO: Setting log level to 40 00:44:40.448 INFO: Setting log level to 40 00:44:40.448 [2024-12-09 05:37:54.335622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:40.448 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.448 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:40.448 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:40.448 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:40.448 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:44:40.448 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.449 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:41.021 Nvme0n1 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.021 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.021 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.021 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:41.021 [2024-12-09 05:37:54.770510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.021 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:41.021 [ 00:44:41.021 { 00:44:41.021 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:41.021 "subtype": "Discovery", 00:44:41.021 "listen_addresses": [], 00:44:41.021 "allow_any_host": true, 00:44:41.021 "hosts": [] 00:44:41.021 }, 00:44:41.021 { 00:44:41.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:41.021 "subtype": "NVMe", 00:44:41.021 "listen_addresses": [ 00:44:41.021 { 00:44:41.021 "trtype": "TCP", 00:44:41.021 "adrfam": "IPv4", 00:44:41.021 "traddr": "10.0.0.2", 00:44:41.021 "trsvcid": "4420" 00:44:41.021 } 00:44:41.021 ], 00:44:41.021 "allow_any_host": true, 00:44:41.021 "hosts": [], 00:44:41.021 "serial_number": 
"SPDK00000000000001", 00:44:41.021 "model_number": "SPDK bdev Controller", 00:44:41.021 "max_namespaces": 1, 00:44:41.021 "min_cntlid": 1, 00:44:41.021 "max_cntlid": 65519, 00:44:41.021 "namespaces": [ 00:44:41.021 { 00:44:41.021 "nsid": 1, 00:44:41.021 "bdev_name": "Nvme0n1", 00:44:41.021 "name": "Nvme0n1", 00:44:41.021 "nguid": "36344730526055000025384500000031", 00:44:41.021 "uuid": "36344730-5260-5500-0025-384500000031" 00:44:41.021 } 00:44:41.021 ] 00:44:41.021 } 00:44:41.021 ] 00:44:41.021 05:37:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.021 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:41.021 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:41.021 05:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:41.021 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605500 00:44:41.282 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:41.282 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:41.282 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:41.542 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:44:41.542 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605500 '!=' S64GNE0R605500 ']' 00:44:41.542 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:44:41.542 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:41.542 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.542 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:41.542 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.542 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:41.542 05:37:55 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:41.542 rmmod nvme_tcp 00:44:41.542 rmmod nvme_fabrics 00:44:41.542 rmmod nvme_keyring 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1939364 ']' 00:44:41.542 05:37:55 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1939364 00:44:41.542 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1939364 ']' 00:44:41.542 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1939364 00:44:41.542 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:44:41.542 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:41.542 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1939364 00:44:41.803 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:41.803 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:41.803 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1939364' 00:44:41.803 killing process with pid 1939364 00:44:41.803 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1939364 00:44:41.803 05:37:55 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1939364 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:42.745 05:37:56 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:42.745 05:37:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:42.745 05:37:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:44.652 05:37:58 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:44.652 00:44:44.652 real 0m14.409s 00:44:44.652 user 0m13.188s 00:44:44.652 sys 0m6.960s 00:44:44.652 05:37:58 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:44.652 05:37:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:44.652 ************************************ 00:44:44.652 END TEST nvmf_identify_passthru 00:44:44.652 ************************************ 00:44:44.652 05:37:58 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:44.652 05:37:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:44.652 05:37:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:44.652 05:37:58 -- common/autotest_common.sh@10 -- # set +x 00:44:44.913 ************************************ 00:44:44.913 START TEST nvmf_dif 00:44:44.913 ************************************ 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:44.913 * Looking for test storage... 
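The teardown traced just above is the mirror image of the setup. Roughly (the namespace removal is an assumption, since the body of _remove_spdk_ns is not traced):

    kill $nvmfpid                                         # 1939364 in this run
    modprobe -v -r nvme-tcp                               # also drops nvme_fabrics and nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the rules tagged SPDK_NVMF
    ip netns delete cvl_0_0_ns_spdk                       # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1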
00:44:44.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:44.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:44.913 --rc genhtml_branch_coverage=1 00:44:44.913 --rc genhtml_function_coverage=1 00:44:44.913 --rc genhtml_legend=1 00:44:44.913 --rc geninfo_all_blocks=1 00:44:44.913 --rc geninfo_unexecuted_blocks=1 00:44:44.913 00:44:44.913 ' 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:44.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:44.913 --rc genhtml_branch_coverage=1 00:44:44.913 --rc genhtml_function_coverage=1 00:44:44.913 --rc genhtml_legend=1 00:44:44.913 --rc geninfo_all_blocks=1 00:44:44.913 --rc geninfo_unexecuted_blocks=1 00:44:44.913 00:44:44.913 ' 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1725 -- # 
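The lt 1.15 2 trace here is scripts/common.sh comparing the installed lcov version component-wise, splitting on '.', '-', and ':' and padding the shorter version with zeros. A compact restatement of the same logic (not the script verbatim):

    lt() {   # true when $1 is strictly older than $2
        local -a v1 v2; local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo 'old lcov: use the compat LCOV_OPTS set below'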
export 'LCOV=lcov 00:44:44.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:44.913 --rc genhtml_branch_coverage=1 00:44:44.913 --rc genhtml_function_coverage=1 00:44:44.913 --rc genhtml_legend=1 00:44:44.913 --rc geninfo_all_blocks=1 00:44:44.913 --rc geninfo_unexecuted_blocks=1 00:44:44.913 00:44:44.913 ' 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:44.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:44.913 --rc genhtml_branch_coverage=1 00:44:44.913 --rc genhtml_function_coverage=1 00:44:44.913 --rc genhtml_legend=1 00:44:44.913 --rc geninfo_all_blocks=1 00:44:44.913 --rc geninfo_unexecuted_blocks=1 00:44:44.913 00:44:44.913 ' 00:44:44.913 05:37:58 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:44.913 05:37:58 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:44.913 05:37:58 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:44.913 05:37:58 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:44.913 05:37:58 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:44.913 05:37:58 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:44.913 05:37:58 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:44.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:44.913 05:37:58 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:44.913 05:37:58 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:44:44.913 05:37:58 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:44.913 05:37:58 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:44.913 05:37:58 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:44.913 05:37:58 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:44:44.913 05:37:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:53.041 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:53.041 
05:38:05 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:53.041 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:53.041 Found net devices under 0000:31:00.0: cvl_0_0 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:53.041 05:38:05 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:53.042 Found net devices under 0000:31:00.1: cvl_0_1 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:53.042 05:38:05 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:53.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:53.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:44:53.042 00:44:53.042 --- 10.0.0.2 ping statistics --- 00:44:53.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:53.042 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:53.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:53.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:44:53.042 00:44:53.042 --- 10.0.0.1 ping statistics --- 00:44:53.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:53.042 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:53.042 05:38:06 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:56.342 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:44:56.342 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:56.342 05:38:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:56.342 05:38:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:56.342 05:38:10 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:56.342 05:38:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1945602 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1945602 00:44:56.342 05:38:10 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:56.342 05:38:10 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1945602 ']' 00:44:56.342 05:38:10 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:56.342 05:38:10 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:56.342 05:38:10 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:44:56.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:56.342 05:38:10 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:56.342 05:38:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:56.342 [2024-12-09 05:38:10.237013] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:44:56.342 [2024-12-09 05:38:10.237138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:56.602 [2024-12-09 05:38:10.402987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.602 [2024-12-09 05:38:10.523769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:56.602 [2024-12-09 05:38:10.523851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:56.602 [2024-12-09 05:38:10.523865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:56.602 [2024-12-09 05:38:10.523879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:56.602 [2024-12-09 05:38:10.523891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:56.602 [2024-12-09 05:38:10.525405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:44:57.203 05:38:11 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:57.203 05:38:11 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:57.203 05:38:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:57.203 05:38:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:57.203 [2024-12-09 05:38:11.072576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.203 05:38:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:57.203 05:38:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:57.203 ************************************ 00:44:57.203 START TEST fio_dif_1_default 00:44:57.203 ************************************ 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:57.203 bdev_null0 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:57.203 [2024-12-09 05:38:11.160967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:57.203 { 00:44:57.203 "params": { 00:44:57.203 "name": "Nvme$subsystem", 00:44:57.203 "trtype": "$TEST_TRANSPORT", 00:44:57.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:57.203 "adrfam": "ipv4", 00:44:57.203 "trsvcid": "$NVMF_PORT", 00:44:57.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:57.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:57.203 "hdgst": ${hdgst:-false}, 00:44:57.203 
"ddgst": ${ddgst:-false} 00:44:57.203 }, 00:44:57.203 "method": "bdev_nvme_attach_controller" 00:44:57.203 } 00:44:57.203 EOF 00:44:57.203 )") 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=,
00:44:57.203 05:38:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:44:57.203 "params": {
00:44:57.203 "name": "Nvme0",
00:44:57.203 "trtype": "tcp",
00:44:57.203 "traddr": "10.0.0.2",
00:44:57.203 "adrfam": "ipv4",
00:44:57.203 "trsvcid": "4420",
00:44:57.203 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:44:57.203 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:44:57.203 "hdgst": false,
00:44:57.203 "ddgst": false
00:44:57.203 },
00:44:57.203 "method": "bdev_nvme_attach_controller"
00:44:57.203 }'
00:44:57.464 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:44:57.464 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:44:57.464 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break
00:44:57.464 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:44:57.464 05:38:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:44:57.724 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:44:57.724 fio-3.35
00:44:57.724 Starting 1 thread
00:45:09.995 
00:45:09.995 filename0: (groupid=0, jobs=1): err= 0: pid=1946129: Mon Dec 9 05:38:22 2024
00:45:09.995 read: IOPS=194, BW=776KiB/s (795kB/s)(7776KiB/10019msec)
00:45:09.995 slat (nsec): min=5999, max=47135, avg=7799.78, stdev=2513.67
00:45:09.995 clat (usec): min=733, max=42935, avg=20590.86, stdev=20175.15
00:45:09.995 lat (usec): min=739, max=42948, avg=20598.66, stdev=20174.82
00:45:09.995 clat percentiles (usec):
00:45:09.995 | 1.00th=[ 807], 5.00th=[ 848], 10.00th=[ 865], 20.00th=[ 881],
00:45:09.995 | 30.00th=[ 906], 40.00th=[ 955], 50.00th=[ 1123], 60.00th=[41157],
00:45:09.995 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206],
00:45:09.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730],
00:45:09.995 | 99.99th=[42730]
00:45:09.995 bw ( KiB/s): min= 672, max= 1088, per=99.98%, avg=776.00, stdev=85.53, samples=20
00:45:09.995 iops : min= 168, max= 272, avg=194.00, stdev=21.38, samples=20
00:45:09.995 lat (usec) : 750=0.21%, 1000=44.24%
00:45:09.995 lat (msec) : 2=6.79%, 50=48.77%
00:45:09.995 cpu : usr=94.19%, sys=5.55%, ctx=13, majf=0, minf=1633
00:45:09.995 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:45:09.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:09.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:09.995 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:09.995 latency : target=0, window=0, percentile=100.00%, depth=4
00:45:09.995 
00:45:09.995 Run status group 0 (all jobs):
00:45:09.995 READ: bw=776KiB/s (795kB/s), 776KiB/s-776KiB/s (795kB/s-795kB/s), io=7776KiB (7963kB), run=10019-10019msec
00:45:09.995 -----------------------------------------------------
00:45:09.995 Suppressions used:
00:45:09.995 count bytes template
00:45:09.995 1 8 /usr/src/fio/parse.c
00:45:09.995 1 8 libtcmalloc_minimal.so
00:45:09.995 1 904 libcrypto.so
00:45:09.995 -----------------------------------------------------
00:45:09.995 
00:45:09.996 05:38:23 
nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:09.995 05:38:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:09.995 05:38:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 00:45:09.996 real 0m12.350s 00:45:09.996 user 0m19.670s 00:45:09.996 sys 0m1.217s 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 ************************************ 00:45:09.996 END TEST fio_dif_1_default 00:45:09.996 ************************************ 00:45:09.996 05:38:23 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:09.996 05:38:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:09.996 05:38:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 ************************************ 00:45:09.996 START TEST fio_dif_1_multi_subsystems 00:45:09.996 ************************************ 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 bdev_null0 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 [2024-12-09 05:38:23.590712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 bdev_null1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:09.996 { 00:45:09.996 "params": { 00:45:09.996 "name": "Nvme$subsystem", 00:45:09.996 "trtype": "$TEST_TRANSPORT", 00:45:09.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:09.996 "adrfam": "ipv4", 00:45:09.996 "trsvcid": "$NVMF_PORT", 00:45:09.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:09.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:09.996 "hdgst": ${hdgst:-false}, 00:45:09.996 "ddgst": ${ddgst:-false} 00:45:09.996 }, 00:45:09.996 "method": "bdev_nvme_attach_controller" 00:45:09.996 } 00:45:09.996 EOF 00:45:09.996 )") 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep 
libasan 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:09.996 { 00:45:09.996 "params": { 00:45:09.996 "name": "Nvme$subsystem", 00:45:09.996 "trtype": "$TEST_TRANSPORT", 00:45:09.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:09.996 "adrfam": "ipv4", 00:45:09.996 "trsvcid": "$NVMF_PORT", 00:45:09.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:09.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:09.996 "hdgst": ${hdgst:-false}, 00:45:09.996 "ddgst": ${ddgst:-false} 00:45:09.996 }, 00:45:09.996 "method": "bdev_nvme_attach_controller" 00:45:09.996 } 00:45:09.996 EOF 00:45:09.996 )") 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:45:09.996 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:09.996 "params": { 00:45:09.996 "name": "Nvme0", 00:45:09.996 "trtype": "tcp", 00:45:09.996 "traddr": "10.0.0.2", 00:45:09.997 "adrfam": "ipv4", 00:45:09.997 "trsvcid": "4420", 00:45:09.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:09.997 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:09.997 "hdgst": false, 00:45:09.997 "ddgst": false 00:45:09.997 }, 00:45:09.997 "method": "bdev_nvme_attach_controller" 00:45:09.997 },{ 00:45:09.997 "params": { 00:45:09.997 "name": "Nvme1", 00:45:09.997 "trtype": "tcp", 00:45:09.997 "traddr": "10.0.0.2", 00:45:09.997 "adrfam": "ipv4", 00:45:09.997 "trsvcid": "4420", 00:45:09.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:09.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:09.997 "hdgst": false, 00:45:09.997 "ddgst": false 00:45:09.997 }, 00:45:09.997 "method": "bdev_nvme_attach_controller" 00:45:09.997 }' 00:45:09.997 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:09.997 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:09.997 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:45:09.997 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:09.997 05:38:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:10.264 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:10.264 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:10.264 fio-3.35 00:45:10.264 
Starting 2 threads 00:45:22.486 00:45:22.486 filename0: (groupid=0, jobs=1): err= 0: pid=1948644: Mon Dec 9 05:38:35 2024 00:45:22.486 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec) 00:45:22.486 slat (nsec): min=6085, max=51791, avg=7865.29, stdev=2801.50 00:45:22.486 clat (usec): min=40718, max=44849, avg=41058.01, stdev=349.65 00:45:22.486 lat (usec): min=40724, max=44901, avg=41065.88, stdev=350.48 00:45:22.486 clat percentiles (usec): 00:45:22.486 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:45:22.486 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:45:22.486 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:45:22.486 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:45:22.486 | 99.99th=[44827] 00:45:22.486 bw ( KiB/s): min= 384, max= 416, per=33.86%, avg=388.80, stdev=11.72, samples=20 00:45:22.486 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:45:22.486 lat (msec) : 50=100.00% 00:45:22.486 cpu : usr=96.16%, sys=3.59%, ctx=12, majf=0, minf=1631 00:45:22.486 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:22.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:22.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:22.486 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:22.486 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:22.486 filename1: (groupid=0, jobs=1): err= 0: pid=1948645: Mon Dec 9 05:38:35 2024 00:45:22.486 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10009msec) 00:45:22.486 slat (nsec): min=6026, max=51867, avg=7802.01, stdev=2387.61 00:45:22.486 clat (usec): min=492, max=45215, avg=21092.27, stdev=20188.77 00:45:22.486 lat (usec): min=501, max=45267, avg=21100.07, stdev=20188.44 00:45:22.486 clat percentiles (usec): 00:45:22.486 | 1.00th=[ 660], 5.00th=[ 717], 10.00th=[ 750], 20.00th=[ 799], 00:45:22.486 | 30.00th=[ 857], 40.00th=[ 898], 50.00th=[40633], 60.00th=[41157], 00:45:22.486 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:45:22.486 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:45:22.486 | 99.99th=[45351] 00:45:22.486 bw ( KiB/s): min= 673, max= 768, per=65.97%, avg=756.85, stdev=27.84, samples=20 00:45:22.486 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:45:22.486 lat (usec) : 500=0.05%, 750=10.71%, 1000=37.39% 00:45:22.486 lat (msec) : 2=1.64%, 50=50.21% 00:45:22.486 cpu : usr=95.83%, sys=3.93%, ctx=12, majf=0, minf=1634 00:45:22.486 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:22.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:22.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:22.486 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:22.486 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:22.486 00:45:22.486 Run status group 0 (all jobs): 00:45:22.486 READ: bw=1146KiB/s (1174kB/s), 389KiB/s-758KiB/s (399kB/s-776kB/s), io=11.2MiB (11.8MB), run=10009-10024msec 00:45:22.486 ----------------------------------------------------- 00:45:22.486 Suppressions used: 00:45:22.486 count bytes template 00:45:22.486 2 16 /usr/src/fio/parse.c 00:45:22.486 1 8 libtcmalloc_minimal.so 00:45:22.486 1 904 libcrypto.so 00:45:22.486 ----------------------------------------------------- 00:45:22.486 00:45:22.486 05:38:36 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.486 00:45:22.486 real 0m12.614s 00:45:22.486 user 0m38.285s 00:45:22.486 sys 0m1.455s 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:22.486 05:38:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:22.486 ************************************ 00:45:22.486 END TEST fio_dif_1_multi_subsystems 00:45:22.486 ************************************ 00:45:22.486 05:38:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:22.486 05:38:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:22.486 05:38:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:22.486 05:38:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:22.486 ************************************ 00:45:22.486 START TEST fio_dif_rand_params 00:45:22.486 ************************************ 00:45:22.486 
05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.487 bdev_null0 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.487 [2024-12-09 05:38:36.288292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # config=() 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:22.487 { 00:45:22.487 "params": { 00:45:22.487 "name": "Nvme$subsystem", 00:45:22.487 "trtype": "$TEST_TRANSPORT", 00:45:22.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:22.487 "adrfam": "ipv4", 00:45:22.487 "trsvcid": "$NVMF_PORT", 00:45:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:22.487 "hdgst": ${hdgst:-false}, 00:45:22.487 "ddgst": ${ddgst:-false} 00:45:22.487 }, 00:45:22.487 "method": "bdev_nvme_attach_controller" 00:45:22.487 } 00:45:22.487 EOF 00:45:22.487 )") 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
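[Editor's sketch] The jq step above closes out the single-subsystem JSON for this fio_dif_rand_params run. For orientation, the RPC sequence that created its target a few records earlier, with arguments copied verbatim from the rpc_cmd records in this trace; rpc_cmd is assumed to wrap SPDK's scripts/rpc.py, so that path is illustrative:

# A 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3,
# attached as a namespace of cnode0 and listening over NVMe/TCP on 10.0.0.2:4420.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420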
00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:22.487 "params": { 00:45:22.487 "name": "Nvme0", 00:45:22.487 "trtype": "tcp", 00:45:22.487 "traddr": "10.0.0.2", 00:45:22.487 "adrfam": "ipv4", 00:45:22.487 "trsvcid": "4420", 00:45:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:22.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:22.487 "hdgst": false, 00:45:22.487 "ddgst": false 00:45:22.487 }, 00:45:22.487 "method": "bdev_nvme_attach_controller" 00:45:22.487 }' 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:22.487 05:38:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:23.057 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:23.057 ... 00:45:23.057 fio-3.35 00:45:23.057 Starting 3 threads 00:45:29.640 00:45:29.640 filename0: (groupid=0, jobs=1): err= 0: pid=1951156: Mon Dec 9 05:38:42 2024 00:45:29.640 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(178MiB/5046msec) 00:45:29.640 slat (nsec): min=8934, max=50405, avg=13550.73, stdev=2069.88 00:45:29.640 clat (usec): min=5758, max=55912, avg=10613.53, stdev=4274.91 00:45:29.640 lat (usec): min=5771, max=55962, avg=10627.08, stdev=4275.36 00:45:29.640 clat percentiles (usec): 00:45:29.640 | 1.00th=[ 6456], 5.00th=[ 7308], 10.00th=[ 7898], 20.00th=[ 8455], 00:45:29.640 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[10683], 60.00th=[10945], 00:45:29.640 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12125], 95.00th=[12649], 00:45:29.640 | 99.00th=[14353], 99.50th=[49021], 99.90th=[54264], 99.95th=[55837], 00:45:29.640 | 99.99th=[55837] 00:45:29.640 bw ( KiB/s): min=31488, max=41216, per=34.01%, avg=36275.20, stdev=2528.66, samples=10 00:45:29.640 iops : min= 246, max= 322, avg=283.40, stdev=19.76, samples=10 00:45:29.640 lat (msec) : 10=38.45%, 20=60.56%, 50=0.56%, 100=0.42% 00:45:29.640 cpu : usr=94.45%, sys=5.21%, ctx=16, majf=0, minf=1636 00:45:29.640 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:29.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.640 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.640 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:29.641 filename0: (groupid=0, jobs=1): err= 0: pid=1951157: Mon Dec 9 05:38:42 2024 00:45:29.641 read: IOPS=266, BW=33.3MiB/s (35.0MB/s)(168MiB/5045msec) 00:45:29.641 slat (nsec): min=8989, max=49105, avg=12123.24, stdev=1528.20 00:45:29.641 clat (usec): min=5680, max=55880, avg=11197.23, stdev=4395.42 00:45:29.641 lat (usec): min=5692, max=55929, avg=11209.35, stdev=4395.78 00:45:29.641 clat percentiles (usec): 00:45:29.641 | 1.00th=[ 6783], 5.00th=[ 7898], 10.00th=[ 8291], 20.00th=[ 8979], 00:45:29.641 | 
30.00th=[10028], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:45:29.641 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:45:29.641 | 99.00th=[47449], 99.50th=[50594], 99.90th=[53740], 99.95th=[55837], 00:45:29.641 | 99.99th=[55837] 00:45:29.641 bw ( KiB/s): min=25344, max=38656, per=32.26%, avg=34406.40, stdev=3694.45, samples=10 00:45:29.641 iops : min= 198, max= 302, avg=268.80, stdev=28.86, samples=10 00:45:29.641 lat (msec) : 10=30.09%, 20=68.87%, 50=0.30%, 100=0.74% 00:45:29.641 cpu : usr=94.31%, sys=5.37%, ctx=11, majf=0, minf=1634 00:45:29.641 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:29.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.641 issued rwts: total=1346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:29.641 filename0: (groupid=0, jobs=1): err= 0: pid=1951158: Mon Dec 9 05:38:42 2024 00:45:29.641 read: IOPS=287, BW=35.9MiB/s (37.7MB/s)(180MiB/5004msec) 00:45:29.641 slat (nsec): min=6124, max=44669, avg=11443.70, stdev=1651.37 00:45:29.641 clat (usec): min=3920, max=53073, avg=10416.31, stdev=8864.64 00:45:29.641 lat (usec): min=3929, max=53083, avg=10427.76, stdev=8864.74 00:45:29.641 clat percentiles (usec): 00:45:29.641 | 1.00th=[ 4424], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7570], 00:45:29.641 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:45:29.641 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[11731], 00:45:29.641 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52691], 99.95th=[53216], 00:45:29.641 | 99.99th=[53216] 00:45:29.641 bw ( KiB/s): min=26112, max=41216, per=34.46%, avg=36761.60, stdev=4942.27, samples=10 00:45:29.641 iops : min= 204, max= 322, avg=287.20, stdev=38.61, samples=10 00:45:29.641 lat (msec) : 4=0.07%, 10=85.27%, 20=9.87%, 50=3.61%, 100=1.18% 00:45:29.641 cpu : usr=95.98%, sys=3.72%, ctx=9, majf=0, minf=1634 00:45:29.641 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:29.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.641 issued rwts: total=1439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:29.641 00:45:29.641 Run status group 0 (all jobs): 00:45:29.641 READ: bw=104MiB/s (109MB/s), 33.3MiB/s-35.9MiB/s (35.0MB/s-37.7MB/s), io=526MiB (551MB), run=5004-5046msec 00:45:29.641 ----------------------------------------------------- 00:45:29.641 Suppressions used: 00:45:29.641 count bytes template 00:45:29.641 5 44 /usr/src/fio/parse.c 00:45:29.641 1 8 libtcmalloc_minimal.so 00:45:29.641 1 904 libcrypto.so 00:45:29.641 ----------------------------------------------------- 00:45:29.641 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 bdev_null0 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 [2024-12-09 05:38:43.544527] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 bdev_null1 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 bdev_null2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.641 05:38:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.641 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:29.902 { 00:45:29.902 "params": { 00:45:29.902 "name": "Nvme$subsystem", 00:45:29.902 "trtype": "$TEST_TRANSPORT", 00:45:29.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:29.902 "adrfam": "ipv4", 00:45:29.902 "trsvcid": "$NVMF_PORT", 00:45:29.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:29.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:29.902 "hdgst": ${hdgst:-false}, 00:45:29.902 "ddgst": ${ddgst:-false} 00:45:29.902 }, 00:45:29.902 "method": "bdev_nvme_attach_controller" 00:45:29.902 } 00:45:29.902 EOF 00:45:29.902 )") 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:29.902 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:29.903 { 00:45:29.903 "params": { 00:45:29.903 "name": "Nvme$subsystem", 00:45:29.903 "trtype": "$TEST_TRANSPORT", 00:45:29.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:29.903 "adrfam": "ipv4", 00:45:29.903 "trsvcid": "$NVMF_PORT", 00:45:29.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:29.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:29.903 "hdgst": ${hdgst:-false}, 00:45:29.903 "ddgst": ${ddgst:-false} 00:45:29.903 }, 00:45:29.903 "method": "bdev_nvme_attach_controller" 00:45:29.903 } 00:45:29.903 EOF 00:45:29.903 )") 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:29.903 { 00:45:29.903 "params": { 00:45:29.903 "name": "Nvme$subsystem", 00:45:29.903 "trtype": "$TEST_TRANSPORT", 00:45:29.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:29.903 "adrfam": "ipv4", 00:45:29.903 "trsvcid": "$NVMF_PORT", 00:45:29.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:29.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:29.903 "hdgst": ${hdgst:-false}, 00:45:29.903 "ddgst": ${ddgst:-false} 00:45:29.903 }, 00:45:29.903 "method": "bdev_nvme_attach_controller" 00:45:29.903 } 00:45:29.903 EOF 00:45:29.903 )") 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
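[Editor's sketch] Three config+= fragments now sit in the array, one per subsystem, and the records that follow join them with IFS=, and printf. A reduced, runnable sketch of that joining pattern: the for-loop framing is illustrative (the real helper iterates its arguments, and defaults hdgst/ddgst via ${hdgst:-false}), while the field values are the ones this trace resolves to:

config=()
for subsystem in 0 1 2; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# "${config[*]}" joins the array elements on the first character of IFS,
# producing the comma-separated object list printed in the next record.
IFS=,
printf '%s\n' "${config[*]}"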
00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:29.903 "params": { 00:45:29.903 "name": "Nvme0", 00:45:29.903 "trtype": "tcp", 00:45:29.903 "traddr": "10.0.0.2", 00:45:29.903 "adrfam": "ipv4", 00:45:29.903 "trsvcid": "4420", 00:45:29.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:29.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:29.903 "hdgst": false, 00:45:29.903 "ddgst": false 00:45:29.903 }, 00:45:29.903 "method": "bdev_nvme_attach_controller" 00:45:29.903 },{ 00:45:29.903 "params": { 00:45:29.903 "name": "Nvme1", 00:45:29.903 "trtype": "tcp", 00:45:29.903 "traddr": "10.0.0.2", 00:45:29.903 "adrfam": "ipv4", 00:45:29.903 "trsvcid": "4420", 00:45:29.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:29.903 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:29.903 "hdgst": false, 00:45:29.903 "ddgst": false 00:45:29.903 }, 00:45:29.903 "method": "bdev_nvme_attach_controller" 00:45:29.903 },{ 00:45:29.903 "params": { 00:45:29.903 "name": "Nvme2", 00:45:29.903 "trtype": "tcp", 00:45:29.903 "traddr": "10.0.0.2", 00:45:29.903 "adrfam": "ipv4", 00:45:29.903 "trsvcid": "4420", 00:45:29.903 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:29.903 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:29.903 "hdgst": false, 00:45:29.903 "ddgst": false 00:45:29.903 }, 00:45:29.903 "method": "bdev_nvme_attach_controller" 00:45:29.903 }' 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:29.903 05:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.163 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:30.163 ... 00:45:30.163 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:30.163 ... 00:45:30.163 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:30.163 ... 
00:45:30.163 fio-3.35 00:45:30.163 Starting 24 threads 00:45:42.422 00:45:42.422 filename0: (groupid=0, jobs=1): err= 0: pid=1952668: Mon Dec 9 05:38:55 2024 00:45:42.422 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:45:42.422 slat (nsec): min=6574, max=95038, avg=22833.52, stdev=13709.96 00:45:42.422 clat (usec): min=15941, max=68675, avg=27835.21, stdev=2341.15 00:45:42.422 lat (usec): min=15954, max=68721, avg=27858.04, stdev=2340.63 00:45:42.422 clat percentiles (usec): 00:45:42.422 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:45:42.422 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:45:42.422 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28967], 00:45:42.422 | 99.00th=[30278], 99.50th=[32900], 99.90th=[68682], 99.95th=[68682], 00:45:42.422 | 99.99th=[68682] 00:45:42.422 bw ( KiB/s): min= 2048, max= 2432, per=4.17%, avg=2283.26, stdev=76.94, samples=19 00:45:42.422 iops : min= 512, max= 608, avg=570.74, stdev=19.22, samples=19 00:45:42.422 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:45:42.422 cpu : usr=98.32%, sys=1.17%, ctx=151, majf=0, minf=1634 00:45:42.422 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:42.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.422 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.422 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.422 filename0: (groupid=0, jobs=1): err= 0: pid=1952669: Mon Dec 9 05:38:55 2024 00:45:42.422 read: IOPS=599, BW=2399KiB/s (2457kB/s)(23.6MiB/10057msec) 00:45:42.422 slat (nsec): min=4449, max=83734, avg=20323.54, stdev=13272.46 00:45:42.422 clat (msec): min=13, max=110, avg=26.50, stdev= 6.31 00:45:42.422 lat (msec): min=14, max=111, avg=26.52, stdev= 6.31 00:45:42.422 clat percentiles (msec): 00:45:42.422 | 1.00th=[ 17], 5.00th=[ 19], 10.00th=[ 20], 20.00th=[ 23], 00:45:42.422 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.422 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 30], 00:45:42.422 | 99.00th=[ 41], 99.50th=[ 72], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.422 | 99.99th=[ 111] 00:45:42.422 bw ( KiB/s): min= 2048, max= 2768, per=4.40%, avg=2406.15, stdev=191.19, samples=20 00:45:42.422 iops : min= 512, max= 692, avg=601.50, stdev=47.82, samples=20 00:45:42.422 lat (msec) : 20=11.75%, 50=87.72%, 100=0.27%, 250=0.27% 00:45:42.422 cpu : usr=98.47%, sys=1.08%, ctx=126, majf=0, minf=1633 00:45:42.422 IO depths : 1=3.4%, 2=7.4%, 4=17.8%, 8=61.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:45:42.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.422 complete : 0=0.0%, 4=92.1%, 8=2.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.422 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.422 filename0: (groupid=0, jobs=1): err= 0: pid=1952670: Mon Dec 9 05:38:55 2024 00:45:42.422 read: IOPS=573, BW=2295KiB/s (2350kB/s)(22.5MiB/10057msec) 00:45:42.422 slat (nsec): min=4814, max=84679, avg=15838.73, stdev=12261.25 00:45:42.422 clat (msec): min=9, max=107, avg=27.81, stdev= 5.82 00:45:42.422 lat (msec): min=9, max=107, avg=27.82, stdev= 5.82 00:45:42.422 clat percentiles (msec): 00:45:42.422 | 1.00th=[ 18], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 27], 00:45:42.422 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 
60.00th=[ 28], 00:45:42.422 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 32], 95.00th=[ 35], 00:45:42.422 | 99.00th=[ 42], 99.50th=[ 72], 99.90th=[ 105], 99.95th=[ 108], 00:45:42.422 | 99.99th=[ 108] 00:45:42.422 bw ( KiB/s): min= 2048, max= 2464, per=4.21%, avg=2301.75, stdev=93.85, samples=20 00:45:42.422 iops : min= 512, max= 616, avg=575.40, stdev=23.45, samples=20 00:45:42.422 lat (msec) : 10=0.07%, 20=3.79%, 50=95.58%, 100=0.38%, 250=0.17% 00:45:42.422 cpu : usr=98.35%, sys=1.10%, ctx=182, majf=0, minf=1633 00:45:42.422 IO depths : 1=0.5%, 2=1.1%, 4=4.4%, 8=78.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:45:42.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.422 complete : 0=0.0%, 4=89.4%, 8=8.4%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.422 issued rwts: total=5771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.422 filename0: (groupid=0, jobs=1): err= 0: pid=1952671: Mon Dec 9 05:38:55 2024 00:45:42.422 read: IOPS=577, BW=2309KiB/s (2365kB/s)(22.8MiB/10094msec) 00:45:42.422 slat (nsec): min=6330, max=83602, avg=19695.41, stdev=14278.48 00:45:42.422 clat (msec): min=12, max=107, avg=27.57, stdev= 5.28 00:45:42.422 lat (msec): min=12, max=107, avg=27.59, stdev= 5.28 00:45:42.422 clat percentiles (msec): 00:45:42.422 | 1.00th=[ 17], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 28], 00:45:42.422 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.422 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 32], 00:45:42.422 | 99.00th=[ 40], 99.50th=[ 46], 99.90th=[ 107], 99.95th=[ 108], 00:45:42.422 | 99.99th=[ 108] 00:45:42.422 bw ( KiB/s): min= 2176, max= 2504, per=4.25%, avg=2324.55, stdev=84.55, samples=20 00:45:42.422 iops : min= 544, max= 626, avg=581.10, stdev=21.15, samples=20 00:45:42.422 lat (msec) : 20=2.97%, 50=96.74%, 100=0.02%, 250=0.27% 00:45:42.422 cpu : usr=98.26%, sys=1.32%, ctx=87, majf=0, minf=1636 00:45:42.422 IO depths : 1=3.0%, 2=6.1%, 4=14.2%, 8=65.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:45:42.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.422 complete : 0=0.0%, 4=91.6%, 8=4.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.422 issued rwts: total=5828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.422 filename0: (groupid=0, jobs=1): err= 0: pid=1952672: Mon Dec 9 05:38:55 2024 00:45:42.422 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.6MiB/10102msec) 00:45:42.422 slat (nsec): min=6319, max=80746, avg=13233.47, stdev=10611.11 00:45:42.422 clat (msec): min=4, max=105, avg=27.80, stdev= 4.65 00:45:42.422 lat (msec): min=4, max=105, avg=27.81, stdev= 4.65 00:45:42.422 clat percentiles (msec): 00:45:42.422 | 1.00th=[ 13], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.422 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.422 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.422 | 99.00th=[ 30], 99.50th=[ 31], 99.90th=[ 106], 99.95th=[ 106], 00:45:42.422 | 99.99th=[ 106] 00:45:42.422 bw ( KiB/s): min= 2176, max= 2565, per=4.22%, avg=2310.15, stdev=66.41, samples=20 00:45:42.422 iops : min= 544, max= 641, avg=577.45, stdev=16.57, samples=20 00:45:42.422 lat (msec) : 10=0.69%, 20=0.73%, 50=98.31%, 250=0.28% 00:45:42.422 cpu : usr=98.46%, sys=1.10%, ctx=105, majf=0, minf=1632 00:45:42.422 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:42.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.423 filename0: (groupid=0, jobs=1): err= 0: pid=1952673: Mon Dec 9 05:38:55 2024 00:45:42.423 read: IOPS=574, BW=2296KiB/s (2351kB/s)(22.5MiB/10055msec) 00:45:42.423 slat (nsec): min=6669, max=88281, avg=22433.96, stdev=12504.88 00:45:42.423 clat (msec): min=14, max=105, avg=27.67, stdev= 4.76 00:45:42.423 lat (msec): min=14, max=105, avg=27.69, stdev= 4.76 00:45:42.423 clat percentiles (msec): 00:45:42.423 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.423 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.423 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.423 | 99.00th=[ 31], 99.50th=[ 54], 99.90th=[ 106], 99.95th=[ 106], 00:45:42.423 | 99.99th=[ 106] 00:45:42.423 bw ( KiB/s): min= 2064, max= 2560, per=4.21%, avg=2302.35, stdev=98.01, samples=20 00:45:42.423 iops : min= 516, max= 640, avg=575.55, stdev=24.50, samples=20 00:45:42.423 lat (msec) : 20=2.25%, 50=97.19%, 100=0.28%, 250=0.28% 00:45:42.423 cpu : usr=98.52%, sys=0.97%, ctx=153, majf=0, minf=1633 00:45:42.423 IO depths : 1=5.6%, 2=11.6%, 4=24.1%, 8=51.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:45:42.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 issued rwts: total=5772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.423 filename0: (groupid=0, jobs=1): err= 0: pid=1952674: Mon Dec 9 05:38:55 2024 00:45:42.423 read: IOPS=568, BW=2273KiB/s (2328kB/s)(22.4MiB/10078msec) 00:45:42.423 slat (nsec): min=6518, max=73936, avg=22629.43, stdev=11490.04 00:45:42.423 clat (msec): min=16, max=105, avg=27.96, stdev= 4.22 00:45:42.423 lat (msec): min=16, max=105, avg=27.98, stdev= 4.22 00:45:42.423 clat percentiles (msec): 00:45:42.423 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.423 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.423 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.423 | 99.00th=[ 31], 99.50th=[ 38], 99.90th=[ 106], 99.95th=[ 106], 00:45:42.423 | 99.99th=[ 106] 00:45:42.423 bw ( KiB/s): min= 2176, max= 2308, per=4.17%, avg=2285.00, stdev=46.99, samples=20 00:45:42.423 iops : min= 544, max= 577, avg=571.25, stdev=11.75, samples=20 00:45:42.423 lat (msec) : 20=0.28%, 50=99.44%, 250=0.28% 00:45:42.423 cpu : usr=97.46%, sys=1.69%, ctx=681, majf=0, minf=1635 00:45:42.423 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:42.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.423 filename0: (groupid=0, jobs=1): err= 0: pid=1952675: Mon Dec 9 05:38:55 2024 00:45:42.423 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.4MiB/10074msec) 00:45:42.423 slat (nsec): min=6120, max=73717, avg=22388.72, stdev=10963.04 00:45:42.423 clat (msec): min=12, max=107, avg=27.91, stdev= 4.45 00:45:42.423 lat (msec): min=12, max=107, avg=27.93, stdev= 4.45 00:45:42.423 clat 
percentiles (msec): 00:45:42.423 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.423 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.423 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.423 | 99.00th=[ 31], 99.50th=[ 39], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.423 | 99.99th=[ 108] 00:45:42.423 bw ( KiB/s): min= 2176, max= 2368, per=4.18%, avg=2287.40, stdev=50.27, samples=20 00:45:42.423 iops : min= 544, max= 592, avg=571.85, stdev=12.57, samples=20 00:45:42.423 lat (msec) : 20=0.70%, 50=99.02%, 250=0.28% 00:45:42.423 cpu : usr=98.42%, sys=1.18%, ctx=68, majf=0, minf=1635 00:45:42.423 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:42.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.423 filename1: (groupid=0, jobs=1): err= 0: pid=1952676: Mon Dec 9 05:38:55 2024 00:45:42.423 read: IOPS=577, BW=2309KiB/s (2365kB/s)(22.7MiB/10065msec) 00:45:42.423 slat (nsec): min=6445, max=49869, avg=11785.32, stdev=4397.67 00:45:42.423 clat (usec): min=3977, max=79813, avg=27613.66, stdev=3830.75 00:45:42.423 lat (usec): min=3990, max=79825, avg=27625.45, stdev=3830.79 00:45:42.423 clat percentiles (usec): 00:45:42.423 | 1.00th=[11207], 5.00th=[26608], 10.00th=[27132], 20.00th=[27395], 00:45:42.423 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:45:42.423 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28705], 95.00th=[29230], 00:45:42.423 | 99.00th=[30016], 99.50th=[31589], 99.90th=[80217], 99.95th=[80217], 00:45:42.423 | 99.99th=[80217] 00:45:42.423 bw ( KiB/s): min= 2176, max= 2840, per=4.23%, avg=2317.50, stdev=135.58, samples=20 00:45:42.423 iops : min= 544, max= 710, avg=579.30, stdev=33.91, samples=20 00:45:42.423 lat (msec) : 4=0.03%, 10=0.74%, 20=1.34%, 50=97.61%, 100=0.28% 00:45:42.423 cpu : usr=98.56%, sys=1.13%, ctx=14, majf=0, minf=1632 00:45:42.423 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:42.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 issued rwts: total=5811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.423 filename1: (groupid=0, jobs=1): err= 0: pid=1952677: Mon Dec 9 05:38:55 2024 00:45:42.423 read: IOPS=576, BW=2307KiB/s (2362kB/s)(22.7MiB/10065msec) 00:45:42.423 slat (nsec): min=4304, max=80373, avg=20890.74, stdev=12322.05 00:45:42.423 clat (msec): min=12, max=111, avg=27.56, stdev= 5.25 00:45:42.423 lat (msec): min=12, max=111, avg=27.58, stdev= 5.25 00:45:42.423 clat percentiles (msec): 00:45:42.423 | 1.00th=[ 17], 5.00th=[ 22], 10.00th=[ 27], 20.00th=[ 28], 00:45:42.423 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.423 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 30], 00:45:42.423 | 99.00th=[ 40], 99.50th=[ 53], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.423 | 99.99th=[ 111] 00:45:42.423 bw ( KiB/s): min= 2144, max= 2672, per=4.23%, avg=2314.90, stdev=105.23, samples=20 00:45:42.423 iops : min= 536, max= 668, avg=578.65, stdev=26.29, samples=20 00:45:42.423 lat (msec) : 20=3.72%, 50=95.73%, 100=0.28%, 250=0.28% 00:45:42.423 cpu : 
usr=98.60%, sys=1.09%, ctx=15, majf=0, minf=1633 00:45:42.423 IO depths : 1=4.8%, 2=9.9%, 4=20.8%, 8=56.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:45:42.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 complete : 0=0.0%, 4=93.0%, 8=1.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.423 issued rwts: total=5804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.423 filename1: (groupid=0, jobs=1): err= 0: pid=1952678: Mon Dec 9 05:38:55 2024 00:45:42.423 read: IOPS=568, BW=2273KiB/s (2328kB/s)(22.4MiB/10078msec) 00:45:42.423 slat (nsec): min=6396, max=73102, avg=17249.50, stdev=10456.47 00:45:42.423 clat (msec): min=11, max=105, avg=28.01, stdev= 4.25 00:45:42.423 lat (msec): min=11, max=105, avg=28.02, stdev= 4.25 00:45:42.423 clat percentiles (msec): 00:45:42.423 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.424 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.424 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.424 | 99.00th=[ 31], 99.50th=[ 39], 99.90th=[ 106], 99.95th=[ 106], 00:45:42.424 | 99.99th=[ 106] 00:45:42.424 bw ( KiB/s): min= 2176, max= 2308, per=4.17%, avg=2285.00, stdev=46.99, samples=20 00:45:42.424 iops : min= 544, max= 577, avg=571.25, stdev=11.75, samples=20 00:45:42.424 lat (msec) : 20=0.35%, 50=99.37%, 250=0.28% 00:45:42.424 cpu : usr=98.33%, sys=1.20%, ctx=105, majf=0, minf=1631 00:45:42.424 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:42.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.424 filename1: (groupid=0, jobs=1): err= 0: pid=1952679: Mon Dec 9 05:38:55 2024 00:45:42.424 read: IOPS=566, BW=2265KiB/s (2320kB/s)(22.2MiB/10057msec) 00:45:42.424 slat (nsec): min=4532, max=95836, avg=25064.59, stdev=12851.90 00:45:42.424 clat (msec): min=20, max=107, avg=28.02, stdev= 4.56 00:45:42.424 lat (msec): min=20, max=107, avg=28.05, stdev= 4.56 00:45:42.424 clat percentiles (msec): 00:45:42.424 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.424 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.424 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.424 | 99.00th=[ 31], 99.50th=[ 58], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.424 | 99.99th=[ 108] 00:45:42.424 bw ( KiB/s): min= 2048, max= 2432, per=4.15%, avg=2271.50, stdev=91.52, samples=20 00:45:42.424 iops : min= 512, max= 608, avg=567.80, stdev=22.86, samples=20 00:45:42.424 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:45:42.424 cpu : usr=98.74%, sys=0.83%, ctx=118, majf=0, minf=1633 00:45:42.424 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:42.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.424 filename1: (groupid=0, jobs=1): err= 0: pid=1952680: Mon Dec 9 05:38:55 2024 00:45:42.424 read: IOPS=568, BW=2274KiB/s (2328kB/s)(22.3MiB/10052msec) 00:45:42.424 slat (nsec): min=6314, max=92852, avg=21220.54, 
stdev=15094.02 00:45:42.424 clat (msec): min=16, max=107, avg=27.97, stdev= 4.70 00:45:42.424 lat (msec): min=16, max=107, avg=27.99, stdev= 4.70 00:45:42.424 clat percentiles (msec): 00:45:42.424 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.424 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.424 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.424 | 99.00th=[ 34], 99.50th=[ 54], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.424 | 99.99th=[ 108] 00:45:42.424 bw ( KiB/s): min= 2048, max= 2448, per=4.17%, avg=2279.15, stdev=89.98, samples=20 00:45:42.424 iops : min= 512, max= 612, avg=569.75, stdev=22.49, samples=20 00:45:42.424 lat (msec) : 20=1.02%, 50=98.42%, 100=0.28%, 250=0.28% 00:45:42.424 cpu : usr=98.60%, sys=1.09%, ctx=13, majf=0, minf=1633 00:45:42.424 IO depths : 1=5.8%, 2=11.7%, 4=24.2%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:45:42.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 issued rwts: total=5714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.424 filename1: (groupid=0, jobs=1): err= 0: pid=1952681: Mon Dec 9 05:38:55 2024 00:45:42.424 read: IOPS=543, BW=2176KiB/s (2228kB/s)(21.4MiB/10057msec) 00:45:42.424 slat (nsec): min=4299, max=84540, avg=19826.15, stdev=12457.98 00:45:42.424 clat (msec): min=13, max=114, avg=29.21, stdev= 6.23 00:45:42.424 lat (msec): min=13, max=114, avg=29.23, stdev= 6.23 00:45:42.424 clat percentiles (msec): 00:45:42.424 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.424 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.424 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 36], 95.00th=[ 39], 00:45:42.424 | 99.00th=[ 44], 99.50th=[ 60], 99.90th=[ 115], 99.95th=[ 115], 00:45:42.424 | 99.99th=[ 115] 00:45:42.424 bw ( KiB/s): min= 1920, max= 2304, per=3.99%, avg=2181.35, stdev=123.27, samples=20 00:45:42.424 iops : min= 480, max= 576, avg=545.30, stdev=30.86, samples=20 00:45:42.424 lat (msec) : 20=1.52%, 50=97.90%, 100=0.37%, 250=0.22% 00:45:42.424 cpu : usr=98.64%, sys=1.05%, ctx=12, majf=0, minf=1634 00:45:42.424 IO depths : 1=4.7%, 2=9.4%, 4=19.7%, 8=57.8%, 16=8.5%, 32=0.0%, >=64=0.0% 00:45:42.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 complete : 0=0.0%, 4=92.7%, 8=2.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 issued rwts: total=5470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.424 filename1: (groupid=0, jobs=1): err= 0: pid=1952682: Mon Dec 9 05:38:55 2024 00:45:42.424 read: IOPS=575, BW=2303KiB/s (2358kB/s)(22.6MiB/10062msec) 00:45:42.424 slat (nsec): min=6229, max=73569, avg=15226.32, stdev=9201.76 00:45:42.424 clat (usec): min=5392, max=79933, avg=27648.94, stdev=3647.07 00:45:42.424 lat (usec): min=5401, max=79960, avg=27664.17, stdev=3647.30 00:45:42.424 clat percentiles (usec): 00:45:42.424 | 1.00th=[12125], 5.00th=[26608], 10.00th=[27132], 20.00th=[27395], 00:45:42.424 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:45:42.424 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28705], 95.00th=[28967], 00:45:42.424 | 99.00th=[30016], 99.50th=[31589], 99.90th=[80217], 99.95th=[80217], 00:45:42.424 | 99.99th=[80217] 00:45:42.424 bw ( KiB/s): min= 2176, max= 2693, per=4.22%, avg=2310.15, stdev=114.16, 
samples=20 00:45:42.424 iops : min= 544, max= 673, avg=577.45, stdev=28.46, samples=20 00:45:42.424 lat (msec) : 10=0.79%, 20=1.00%, 50=97.93%, 100=0.28% 00:45:42.424 cpu : usr=98.45%, sys=1.07%, ctx=116, majf=0, minf=1638 00:45:42.424 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:42.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.424 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.424 filename1: (groupid=0, jobs=1): err= 0: pid=1952685: Mon Dec 9 05:38:55 2024 00:45:42.424 read: IOPS=573, BW=2295KiB/s (2350kB/s)(22.6MiB/10104msec) 00:45:42.424 slat (nsec): min=6242, max=79255, avg=20988.58, stdev=12465.67 00:45:42.424 clat (msec): min=5, max=106, avg=27.72, stdev= 5.24 00:45:42.424 lat (msec): min=5, max=106, avg=27.74, stdev= 5.24 00:45:42.424 clat percentiles (msec): 00:45:42.424 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.424 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.424 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 30], 00:45:42.424 | 99.00th=[ 41], 99.50th=[ 45], 99.90th=[ 107], 99.95th=[ 107], 00:45:42.424 | 99.99th=[ 107] 00:45:42.424 bw ( KiB/s): min= 2176, max= 2560, per=4.22%, avg=2311.50, stdev=65.99, samples=20 00:45:42.424 iops : min= 544, max= 640, avg=577.80, stdev=16.50, samples=20 00:45:42.424 lat (msec) : 10=0.66%, 20=3.16%, 50=95.91%, 100=0.03%, 250=0.24% 00:45:42.425 cpu : usr=98.10%, sys=1.28%, ctx=311, majf=0, minf=1635 00:45:42.425 IO depths : 1=2.5%, 2=8.4%, 4=24.2%, 8=54.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:45:42.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 issued rwts: total=5796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.425 filename2: (groupid=0, jobs=1): err= 0: pid=1952686: Mon Dec 9 05:38:55 2024 00:45:42.425 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.4MiB/10058msec) 00:45:42.425 slat (nsec): min=4340, max=90090, avg=24481.26, stdev=14268.06 00:45:42.425 clat (msec): min=12, max=107, avg=27.80, stdev= 5.11 00:45:42.425 lat (msec): min=12, max=107, avg=27.83, stdev= 5.11 00:45:42.425 clat percentiles (msec): 00:45:42.425 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 28], 00:45:42.425 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.425 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 30], 00:45:42.425 | 99.00th=[ 39], 99.50th=[ 59], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.425 | 99.99th=[ 108] 00:45:42.425 bw ( KiB/s): min= 2048, max= 2560, per=4.19%, avg=2290.70, stdev=118.13, samples=20 00:45:42.425 iops : min= 512, max= 640, avg=572.60, stdev=29.52, samples=20 00:45:42.425 lat (msec) : 20=2.00%, 50=97.44%, 100=0.28%, 250=0.28% 00:45:42.425 cpu : usr=98.50%, sys=1.07%, ctx=92, majf=0, minf=1632 00:45:42.425 IO depths : 1=4.0%, 2=9.6%, 4=22.7%, 8=55.1%, 16=8.6%, 32=0.0%, >=64=0.0% 00:45:42.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.425 
filename2: (groupid=0, jobs=1): err= 0: pid=1952687: Mon Dec 9 05:38:55 2024 00:45:42.425 read: IOPS=566, BW=2266KiB/s (2320kB/s)(22.2MiB/10056msec) 00:45:42.425 slat (nsec): min=4336, max=89639, avg=26298.72, stdev=13730.82 00:45:42.425 clat (msec): min=20, max=107, avg=28.00, stdev= 4.54 00:45:42.425 lat (msec): min=20, max=107, avg=28.03, stdev= 4.54 00:45:42.425 clat percentiles (msec): 00:45:42.425 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.425 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.425 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.425 | 99.00th=[ 31], 99.50th=[ 58], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.425 | 99.99th=[ 108] 00:45:42.425 bw ( KiB/s): min= 2048, max= 2432, per=4.15%, avg=2271.70, stdev=91.01, samples=20 00:45:42.425 iops : min= 512, max= 608, avg=567.85, stdev=22.73, samples=20 00:45:42.425 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:45:42.425 cpu : usr=98.79%, sys=0.91%, ctx=13, majf=0, minf=1634 00:45:42.425 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:42.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.425 filename2: (groupid=0, jobs=1): err= 0: pid=1952688: Mon Dec 9 05:38:55 2024 00:45:42.425 read: IOPS=577, BW=2309KiB/s (2364kB/s)(22.7MiB/10063msec) 00:45:42.425 slat (nsec): min=6378, max=66777, avg=14430.55, stdev=7884.33 00:45:42.425 clat (usec): min=2793, max=79868, avg=27592.02, stdev=3871.18 00:45:42.425 lat (usec): min=2804, max=79884, avg=27606.45, stdev=3871.49 00:45:42.425 clat percentiles (usec): 00:45:42.425 | 1.00th=[ 8848], 5.00th=[26608], 10.00th=[27132], 20.00th=[27395], 00:45:42.425 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:45:42.425 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28705], 95.00th=[28967], 00:45:42.425 | 99.00th=[30016], 99.50th=[31589], 99.90th=[80217], 99.95th=[80217], 00:45:42.425 | 99.99th=[80217] 00:45:42.425 bw ( KiB/s): min= 2176, max= 2688, per=4.23%, avg=2316.30, stdev=108.85, samples=20 00:45:42.425 iops : min= 544, max= 672, avg=579.00, stdev=27.18, samples=20 00:45:42.425 lat (msec) : 4=0.14%, 10=0.93%, 20=1.00%, 50=97.66%, 100=0.28% 00:45:42.425 cpu : usr=98.41%, sys=1.12%, ctx=102, majf=0, minf=1636 00:45:42.425 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:42.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.425 filename2: (groupid=0, jobs=1): err= 0: pid=1952689: Mon Dec 9 05:38:55 2024 00:45:42.425 read: IOPS=566, BW=2266KiB/s (2321kB/s)(22.2MiB/10054msec) 00:45:42.425 slat (nsec): min=6243, max=87527, avg=17730.91, stdev=14586.54 00:45:42.425 clat (msec): min=21, max=107, avg=28.10, stdev= 4.51 00:45:42.425 lat (msec): min=21, max=107, avg=28.12, stdev= 4.51 00:45:42.425 clat percentiles (msec): 00:45:42.425 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.425 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.425 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 
95.00th=[ 29], 00:45:42.425 | 99.00th=[ 31], 99.50th=[ 56], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.425 | 99.99th=[ 108] 00:45:42.425 bw ( KiB/s): min= 2048, max= 2308, per=4.15%, avg=2271.95, stdev=81.74, samples=20 00:45:42.425 iops : min= 512, max= 577, avg=567.95, stdev=20.42, samples=20 00:45:42.425 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:45:42.425 cpu : usr=98.81%, sys=0.87%, ctx=31, majf=0, minf=1635 00:45:42.425 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:42.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.425 filename2: (groupid=0, jobs=1): err= 0: pid=1952690: Mon Dec 9 05:38:55 2024 00:45:42.425 read: IOPS=567, BW=2271KiB/s (2325kB/s)(22.3MiB/10076msec) 00:45:42.425 slat (usec): min=6, max=101, avg=24.49, stdev=17.78 00:45:42.425 clat (msec): min=15, max=110, avg=27.95, stdev= 4.63 00:45:42.425 lat (msec): min=15, max=110, avg=27.97, stdev= 4.63 00:45:42.425 clat percentiles (msec): 00:45:42.425 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 28], 00:45:42.425 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.425 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.425 | 99.00th=[ 34], 99.50th=[ 50], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.425 | 99.99th=[ 111] 00:45:42.425 bw ( KiB/s): min= 2176, max= 2400, per=4.17%, avg=2281.35, stdev=61.87, samples=20 00:45:42.425 iops : min= 544, max= 600, avg=570.30, stdev=15.43, samples=20 00:45:42.425 lat (msec) : 20=0.52%, 50=99.16%, 100=0.03%, 250=0.28% 00:45:42.425 cpu : usr=98.04%, sys=1.38%, ctx=104, majf=0, minf=1635 00:45:42.425 IO depths : 1=4.7%, 2=9.5%, 4=20.0%, 8=56.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:45:42.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 complete : 0=0.0%, 4=93.1%, 8=2.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.425 issued rwts: total=5720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.426 filename2: (groupid=0, jobs=1): err= 0: pid=1952691: Mon Dec 9 05:38:55 2024 00:45:42.426 read: IOPS=568, BW=2274KiB/s (2328kB/s)(22.4MiB/10077msec) 00:45:42.426 slat (nsec): min=5611, max=86429, avg=24014.79, stdev=13810.39 00:45:42.426 clat (msec): min=15, max=107, avg=27.95, stdev= 4.32 00:45:42.426 lat (msec): min=15, max=107, avg=27.97, stdev= 4.32 00:45:42.426 clat percentiles (msec): 00:45:42.426 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:45:42.426 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.426 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.426 | 99.00th=[ 31], 99.50th=[ 37], 99.90th=[ 107], 99.95th=[ 108], 00:45:42.426 | 99.99th=[ 108] 00:45:42.426 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2284.55, stdev=46.80, samples=20 00:45:42.426 iops : min= 544, max= 576, avg=571.10, stdev=11.69, samples=20 00:45:42.426 lat (msec) : 20=0.28%, 50=99.44%, 250=0.28% 00:45:42.426 cpu : usr=98.76%, sys=0.90%, ctx=56, majf=0, minf=1634 00:45:42.426 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:42.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.426 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:45:42.426 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.426 filename2: (groupid=0, jobs=1): err= 0: pid=1952692: Mon Dec 9 05:38:55 2024 00:45:42.426 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.5MiB/10086msec) 00:45:42.426 slat (nsec): min=4496, max=87097, avg=16702.88, stdev=12486.88 00:45:42.426 clat (msec): min=9, max=135, avg=27.82, stdev= 6.11 00:45:42.426 lat (msec): min=9, max=135, avg=27.83, stdev= 6.11 00:45:42.426 clat percentiles (msec): 00:45:42.426 | 1.00th=[ 17], 5.00th=[ 21], 10.00th=[ 25], 20.00th=[ 28], 00:45:42.426 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.426 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 31], 00:45:42.426 | 99.00th=[ 42], 99.50th=[ 73], 99.90th=[ 136], 99.95th=[ 136], 00:45:42.426 | 99.99th=[ 136] 00:45:42.426 bw ( KiB/s): min= 2059, max= 2464, per=4.20%, avg=2297.10, stdev=87.47, samples=20 00:45:42.426 iops : min= 514, max= 616, avg=574.20, stdev=21.98, samples=20 00:45:42.426 lat (msec) : 10=0.07%, 20=3.96%, 50=95.42%, 100=0.38%, 250=0.17% 00:45:42.426 cpu : usr=98.53%, sys=1.13%, ctx=61, majf=0, minf=1633 00:45:42.426 IO depths : 1=1.2%, 2=4.5%, 4=14.0%, 8=67.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:45:42.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.426 complete : 0=0.0%, 4=91.8%, 8=4.5%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.426 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.426 filename2: (groupid=0, jobs=1): err= 0: pid=1952694: Mon Dec 9 05:38:55 2024 00:45:42.426 read: IOPS=584, BW=2339KiB/s (2395kB/s)(23.0MiB/10062msec) 00:45:42.426 slat (nsec): min=4339, max=96558, avg=22095.63, stdev=13512.05 00:45:42.426 clat (msec): min=14, max=107, avg=27.16, stdev= 5.50 00:45:42.426 lat (msec): min=14, max=107, avg=27.18, stdev= 5.50 00:45:42.426 clat percentiles (msec): 00:45:42.426 | 1.00th=[ 16], 5.00th=[ 20], 10.00th=[ 22], 20.00th=[ 28], 00:45:42.426 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:45:42.426 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:45:42.426 | 99.00th=[ 43], 99.50th=[ 54], 99.90th=[ 108], 99.95th=[ 108], 00:45:42.426 | 99.99th=[ 108] 00:45:42.426 bw ( KiB/s): min= 2048, max= 2880, per=4.29%, avg=2346.70, stdev=168.96, samples=20 00:45:42.426 iops : min= 512, max= 720, avg=586.60, stdev=42.23, samples=20 00:45:42.426 lat (msec) : 20=6.00%, 50=93.46%, 100=0.27%, 250=0.27% 00:45:42.426 cpu : usr=98.98%, sys=0.74%, ctx=14, majf=0, minf=1633 00:45:42.426 IO depths : 1=4.7%, 2=9.7%, 4=21.1%, 8=56.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:45:42.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.426 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.426 issued rwts: total=5884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:42.426 00:45:42.426 Run status group 0 (all jobs): 00:45:42.426 READ: bw=53.4MiB/s (56.0MB/s), 2176KiB/s-2399KiB/s (2228kB/s-2457kB/s), io=540MiB (566MB), run=10007-10104msec 00:45:42.426 ----------------------------------------------------- 00:45:42.426 Suppressions used: 00:45:42.426 count bytes template 00:45:42.426 45 402 /usr/src/fio/parse.c 00:45:42.426 1 8 libtcmalloc_minimal.so 00:45:42.426 1 904 libcrypto.so 00:45:42.426 ----------------------------------------------------- 
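Note: this closes out the 24-thread randread pass — aggregate 53.4MiB/s across the three DIF-protected null bdevs — and the trace that follows tears down subsystems 0, 1 and 2 before the next parameter set runs. A minimal standalone sketch of that teardown, assuming SPDK's scripts/rpc.py (the tool the rpc_cmd wrapper traced below ultimately drives) is on PATH and a target is listening on the default RPC socket:

# Mirrors destroy_subsystems() in target/dif.sh: drop each NVMe-oF
# subsystem first, then free its backing null bdev.
for sub in 0 1 2; do
    rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
    rpc.py bdev_null_delete "bdev_null${sub}"
done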
00:45:42.426 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:42.687 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 bdev_null0 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 [2024-12-09 05:38:56.527704] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 bdev_null1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:42.688 { 00:45:42.688 "params": { 00:45:42.688 "name": "Nvme$subsystem", 00:45:42.688 "trtype": "$TEST_TRANSPORT", 00:45:42.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:42.688 "adrfam": "ipv4", 00:45:42.688 "trsvcid": "$NVMF_PORT", 00:45:42.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:42.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:42.688 "hdgst": ${hdgst:-false}, 00:45:42.688 "ddgst": ${ddgst:-false} 00:45:42.688 }, 00:45:42.688 "method": "bdev_nvme_attach_controller" 00:45:42.688 } 00:45:42.688 EOF 00:45:42.688 )") 00:45:42.688 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:42.689 { 00:45:42.689 "params": { 00:45:42.689 "name": "Nvme$subsystem", 00:45:42.689 "trtype": "$TEST_TRANSPORT", 00:45:42.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:42.689 "adrfam": "ipv4", 00:45:42.689 "trsvcid": "$NVMF_PORT", 00:45:42.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:42.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:42.689 "hdgst": ${hdgst:-false}, 00:45:42.689 "ddgst": ${ddgst:-false} 00:45:42.689 }, 00:45:42.689 "method": "bdev_nvme_attach_controller" 00:45:42.689 } 00:45:42.689 EOF 00:45:42.689 )") 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
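Note: the gen_nvmf_target_json() trace above (nvmf/common.sh) shows the config templating: one JSON fragment per subsystem is built with a cat <<-EOF heredoc and appended to the config array, while the IFS=,/printf pair traced just below performs the comma-join that feeds the jq . validation seen above. A condensed sketch of the same pattern with this run's values substituted (tcp, 10.0.0.2, port 4420, digests off); the enclosing "subsystems"/"bdev" envelope is a reconstruction, since the trace only shows the per-controller fragments:

config=()
for subsystem in 0 1; do
    # One bdev_nvme_attach_controller entry per subsystem, matching the
    # heredoc in the trace; hdgst/ddgst stay false for the rand_params runs.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Comma-join the fragments and wrap them in a bdev-subsystem config;
# jq . both validates the JSON and pretty-prints it for fio's fd 62.
IFS=,
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .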
00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:42.689 "params": { 00:45:42.689 "name": "Nvme0", 00:45:42.689 "trtype": "tcp", 00:45:42.689 "traddr": "10.0.0.2", 00:45:42.689 "adrfam": "ipv4", 00:45:42.689 "trsvcid": "4420", 00:45:42.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:42.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:42.689 "hdgst": false, 00:45:42.689 "ddgst": false 00:45:42.689 }, 00:45:42.689 "method": "bdev_nvme_attach_controller" 00:45:42.689 },{ 00:45:42.689 "params": { 00:45:42.689 "name": "Nvme1", 00:45:42.689 "trtype": "tcp", 00:45:42.689 "traddr": "10.0.0.2", 00:45:42.689 "adrfam": "ipv4", 00:45:42.689 "trsvcid": "4420", 00:45:42.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:42.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:42.689 "hdgst": false, 00:45:42.689 "ddgst": false 00:45:42.689 }, 00:45:42.689 "method": "bdev_nvme_attach_controller" 00:45:42.689 }' 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:42.689 05:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:43.273 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:43.273 ... 00:45:43.273 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:43.273 ... 
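Note: the entries above show the actual launch: libasan is preloaded ahead of the spdk_bdev fio plugin (this is an ASAN build), and fio reads the generated JSON on fd 62 and the generated job file on fd 61. An equivalent manual invocation with both streams saved to disk — spdk.json and dif.fio are hypothetical file names, and SPDK_DIR stands in for the Jenkins workspace path in the trace; dif.fio would carry the job options echoed above (randread, bs=8k/16k/128k, iodepth=8, numjobs=2):

LD_PRELOAD="/usr/lib64/libasan.so.8 ${SPDK_DIR}/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf spdk.json dif.fio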
00:45:43.273 fio-3.35 00:45:43.273 Starting 4 threads 00:45:49.931 00:45:49.931 filename0: (groupid=0, jobs=1): err= 0: pid=1955191: Mon Dec 9 05:39:03 2024 00:45:49.931 read: IOPS=2572, BW=20.1MiB/s (21.1MB/s)(101MiB/5002msec) 00:45:49.931 slat (nsec): min=5997, max=50593, avg=9062.31, stdev=2046.24 00:45:49.931 clat (usec): min=2123, max=44698, avg=3083.35, stdev=1044.61 00:45:49.931 lat (usec): min=2129, max=44749, avg=3092.41, stdev=1044.85 00:45:49.931 clat percentiles (usec): 00:45:49.931 | 1.00th=[ 2737], 5.00th=[ 2933], 10.00th=[ 2999], 20.00th=[ 3032], 00:45:49.931 | 30.00th=[ 3032], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3064], 00:45:49.931 | 70.00th=[ 3064], 80.00th=[ 3064], 90.00th=[ 3130], 95.00th=[ 3294], 00:45:49.931 | 99.00th=[ 3425], 99.50th=[ 3687], 99.90th=[ 4817], 99.95th=[44827], 00:45:49.931 | 99.99th=[44827] 00:45:49.931 bw ( KiB/s): min=18976, max=20944, per=24.68%, avg=20568.89, stdev=613.40, samples=9 00:45:49.931 iops : min= 2372, max= 2618, avg=2571.11, stdev=76.68, samples=9 00:45:49.931 lat (msec) : 4=99.70%, 10=0.23%, 50=0.06% 00:45:49.931 cpu : usr=95.88%, sys=3.78%, ctx=9, majf=0, minf=1634 00:45:49.931 IO depths : 1=0.1%, 2=0.1%, 4=74.9%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.931 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.931 issued rwts: total=12866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:49.931 filename0: (groupid=0, jobs=1): err= 0: pid=1955192: Mon Dec 9 05:39:03 2024 00:45:49.931 read: IOPS=2599, BW=20.3MiB/s (21.3MB/s)(102MiB/5001msec) 00:45:49.931 slat (nsec): min=5980, max=46824, avg=8162.53, stdev=2973.59 00:45:49.931 clat (usec): min=1443, max=4858, avg=3053.44, stdev=153.18 00:45:49.931 lat (usec): min=1452, max=4895, avg=3061.60, stdev=152.94 00:45:49.931 clat percentiles (usec): 00:45:49.931 | 1.00th=[ 2573], 5.00th=[ 2900], 10.00th=[ 2999], 20.00th=[ 3032], 00:45:49.931 | 30.00th=[ 3032], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3064], 00:45:49.931 | 70.00th=[ 3064], 80.00th=[ 3064], 90.00th=[ 3097], 95.00th=[ 3326], 00:45:49.931 | 99.00th=[ 3425], 99.50th=[ 3556], 99.90th=[ 4490], 99.95th=[ 4555], 00:45:49.931 | 99.99th=[ 4752] 00:45:49.931 bw ( KiB/s): min=20528, max=20985, per=24.95%, avg=20795.67, stdev=152.74, samples=9 00:45:49.931 iops : min= 2566, max= 2623, avg=2599.44, stdev=19.07, samples=9 00:45:49.931 lat (msec) : 2=0.25%, 4=99.50%, 10=0.25% 00:45:49.931 cpu : usr=94.38%, sys=4.42%, ctx=229, majf=0, minf=1634 00:45:49.931 IO depths : 1=0.1%, 2=0.1%, 4=73.8%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.931 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.931 issued rwts: total=12999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:49.931 filename1: (groupid=0, jobs=1): err= 0: pid=1955193: Mon Dec 9 05:39:03 2024 00:45:49.931 read: IOPS=2596, BW=20.3MiB/s (21.3MB/s)(101MiB/5001msec) 00:45:49.931 slat (nsec): min=5997, max=42875, avg=7209.34, stdev=2041.01 00:45:49.931 clat (usec): min=1065, max=5227, avg=3062.57, stdev=176.82 00:45:49.931 lat (usec): min=1072, max=5263, avg=3069.78, stdev=176.66 00:45:49.931 clat percentiles (usec): 00:45:49.931 | 1.00th=[ 2540], 5.00th=[ 2868], 10.00th=[ 2999], 20.00th=[ 3032], 00:45:49.931 | 30.00th=[ 
3032], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3064], 00:45:49.931 | 70.00th=[ 3064], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3359], 00:45:49.931 | 99.00th=[ 3621], 99.50th=[ 3884], 99.90th=[ 4686], 99.95th=[ 4948], 00:45:49.931 | 99.99th=[ 5014] 00:45:49.931 bw ( KiB/s): min=20496, max=21008, per=24.93%, avg=20773.33, stdev=160.40, samples=9 00:45:49.931 iops : min= 2562, max= 2626, avg=2596.67, stdev=20.05, samples=9 00:45:49.931 lat (msec) : 2=0.20%, 4=99.41%, 10=0.39% 00:45:49.931 cpu : usr=95.88%, sys=3.78%, ctx=14, majf=0, minf=1637 00:45:49.931 IO depths : 1=0.1%, 2=0.1%, 4=68.9%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.931 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.931 issued rwts: total=12985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:49.931 filename1: (groupid=0, jobs=1): err= 0: pid=1955194: Mon Dec 9 05:39:03 2024 00:45:49.931 read: IOPS=2650, BW=20.7MiB/s (21.7MB/s)(104MiB/5001msec) 00:45:49.931 slat (nsec): min=6011, max=40952, avg=7101.98, stdev=1799.93 00:45:49.931 clat (usec): min=1378, max=6258, avg=3001.30, stdev=265.36 00:45:49.931 lat (usec): min=1384, max=6299, avg=3008.40, stdev=265.31 00:45:49.931 clat percentiles (usec): 00:45:49.931 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2999], 00:45:49.931 | 30.00th=[ 3032], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3064], 00:45:49.931 | 70.00th=[ 3064], 80.00th=[ 3064], 90.00th=[ 3097], 95.00th=[ 3195], 00:45:49.931 | 99.00th=[ 3916], 99.50th=[ 4047], 99.90th=[ 4178], 99.95th=[ 5932], 00:45:49.931 | 99.99th=[ 5997] 00:45:49.931 bw ( KiB/s): min=20864, max=22080, per=25.50%, avg=21248.00, stdev=407.06, samples=9 00:45:49.931 iops : min= 2608, max= 2760, avg=2656.00, stdev=50.88, samples=9 00:45:49.931 lat (msec) : 2=0.23%, 4=98.94%, 10=0.83% 00:45:49.931 cpu : usr=95.62%, sys=4.06%, ctx=6, majf=0, minf=1634 00:45:49.931 IO depths : 1=0.1%, 2=0.1%, 4=65.4%, 8=34.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.931 complete : 0=0.0%, 4=97.9%, 8=2.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.931 issued rwts: total=13257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:49.931 00:45:49.932 Run status group 0 (all jobs): 00:45:49.932 READ: bw=81.4MiB/s (85.3MB/s), 20.1MiB/s-20.7MiB/s (21.1MB/s-21.7MB/s), io=407MiB (427MB), run=5001-5002msec 00:45:49.932 ----------------------------------------------------- 00:45:49.932 Suppressions used: 00:45:49.932 count bytes template 00:45:49.932 6 52 /usr/src/fio/parse.c 00:45:49.932 1 8 libtcmalloc_minimal.so 00:45:49.932 1 904 libcrypto.so 00:45:49.932 ----------------------------------------------------- 00:45:49.932 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:49.932 
05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.932 00:45:49.932 real 0m27.611s 00:45:49.932 user 5m20.968s 00:45:49.932 sys 0m6.090s 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:49.932 05:39:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.932 ************************************ 00:45:49.932 END TEST fio_dif_rand_params 00:45:49.932 ************************************ 00:45:49.932 05:39:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:49.932 05:39:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:49.932 05:39:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:49.932 05:39:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:50.210 ************************************ 00:45:50.210 START TEST fio_dif_digest 00:45:50.210 ************************************ 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 
00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.210 bdev_null0 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.210 [2024-12-09 05:39:03.978275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.210 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:50.211 { 
00:45:50.211 "params": { 00:45:50.211 "name": "Nvme$subsystem", 00:45:50.211 "trtype": "$TEST_TRANSPORT", 00:45:50.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:50.211 "adrfam": "ipv4", 00:45:50.211 "trsvcid": "$NVMF_PORT", 00:45:50.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:50.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:50.211 "hdgst": ${hdgst:-false}, 00:45:50.211 "ddgst": ${ddgst:-false} 00:45:50.211 }, 00:45:50.211 "method": "bdev_nvme_attach_controller" 00:45:50.211 } 00:45:50.211 EOF 00:45:50.211 )") 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
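(The heredoc template above is expanded once per subsystem and validated with jq; with the values in force here — subsystem 0, TEST_TRANSPORT=tcp, target 10.0.0.2:4420, hdgst/ddgst=true — it yields exactly the object that printf emits just below. A standalone sketch of that expansion step, using the same variable names the trace shows:

    subsystem=0
    TEST_TRANSPORT=tcp
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_PORT=4420
    hdgst=true
    ddgst=true
    cat <<EOF | jq .    # jq re-serializes the object and catches malformed JSON
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF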
00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:45:50.211 05:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:50.211 "params": { 00:45:50.211 "name": "Nvme0", 00:45:50.211 "trtype": "tcp", 00:45:50.211 "traddr": "10.0.0.2", 00:45:50.211 "adrfam": "ipv4", 00:45:50.211 "trsvcid": "4420", 00:45:50.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:50.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:50.211 "hdgst": true, 00:45:50.211 "ddgst": true 00:45:50.211 }, 00:45:50.211 "method": "bdev_nvme_attach_controller" 00:45:50.211 }' 00:45:50.211 05:39:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:50.211 05:39:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:50.211 05:39:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:45:50.211 05:39:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:50.211 05:39:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:50.472 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:50.472 ... 00:45:50.472 fio-3.35 00:45:50.472 Starting 3 threads 00:46:02.700 00:46:02.700 filename0: (groupid=0, jobs=1): err= 0: pid=1956824: Mon Dec 9 05:39:15 2024 00:46:02.700 read: IOPS=248, BW=31.1MiB/s (32.6MB/s)(312MiB/10048msec) 00:46:02.700 slat (nsec): min=6527, max=47331, avg=11555.05, stdev=1991.52 00:46:02.700 clat (usec): min=7931, max=53504, avg=12035.67, stdev=1421.63 00:46:02.700 lat (usec): min=7951, max=53514, avg=12047.22, stdev=1421.62 00:46:02.700 clat percentiles (usec): 00:46:02.700 | 1.00th=[10028], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:46:02.700 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:46:02.700 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:46:02.700 | 99.00th=[14222], 99.50th=[14484], 99.90th=[16712], 99.95th=[49021], 00:46:02.700 | 99.99th=[53740] 00:46:02.700 bw ( KiB/s): min=31232, max=32512, per=31.72%, avg=31948.80, stdev=367.71, samples=20 00:46:02.700 iops : min= 244, max= 254, avg=249.60, stdev= 2.87, samples=20 00:46:02.700 lat (msec) : 10=1.00%, 20=98.92%, 50=0.04%, 100=0.04% 00:46:02.700 cpu : usr=94.08%, sys=5.34%, ctx=404, majf=0, minf=1631 00:46:02.700 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:02.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:02.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:02.700 issued rwts: total=2498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:02.700 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:02.700 filename0: (groupid=0, jobs=1): err= 0: pid=1956825: Mon Dec 9 05:39:15 2024 00:46:02.700 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(300MiB/10045msec) 00:46:02.700 slat (nsec): min=6520, max=44484, avg=10914.80, stdev=1739.36 00:46:02.700 clat (usec): min=9461, max=54428, avg=12536.14, stdev=2033.72 00:46:02.700 lat (usec): min=9471, max=54473, avg=12547.05, stdev=2034.06 00:46:02.700 clat percentiles (usec): 00:46:02.700 | 1.00th=[10552], 5.00th=[11076], 10.00th=[11338], 20.00th=[11600], 00:46:02.700 | 30.00th=[11994], 40.00th=[12256], 
50.00th=[12387], 60.00th=[12649], 00:46:02.700 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:46:02.700 | 99.00th=[14877], 99.50th=[15401], 99.90th=[54264], 99.95th=[54264], 00:46:02.700 | 99.99th=[54264] 00:46:02.700 bw ( KiB/s): min=28416, max=31488, per=30.45%, avg=30668.80, stdev=667.57, samples=20 00:46:02.700 iops : min= 222, max= 246, avg=239.60, stdev= 5.22, samples=20 00:46:02.700 lat (msec) : 10=0.17%, 20=99.62%, 50=0.04%, 100=0.17% 00:46:02.700 cpu : usr=94.61%, sys=5.11%, ctx=15, majf=0, minf=1639 00:46:02.700 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:02.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:02.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:02.700 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:02.700 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:02.700 filename0: (groupid=0, jobs=1): err= 0: pid=1956826: Mon Dec 9 05:39:15 2024 00:46:02.700 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(376MiB/10046msec) 00:46:02.700 slat (nsec): min=6505, max=51444, avg=10046.03, stdev=1741.95 00:46:02.700 clat (usec): min=6445, max=49421, avg=9986.67, stdev=1225.48 00:46:02.700 lat (usec): min=6458, max=49430, avg=9996.72, stdev=1225.33 00:46:02.700 clat percentiles (usec): 00:46:02.700 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:46:02.700 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:46:02.700 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10945], 95.00th=[11076], 00:46:02.700 | 99.00th=[11469], 99.50th=[11600], 99.90th=[12125], 99.95th=[46924], 00:46:02.700 | 99.99th=[49546] 00:46:02.700 bw ( KiB/s): min=37120, max=39424, per=38.23%, avg=38502.40, stdev=686.92, samples=20 00:46:02.700 iops : min= 290, max= 308, avg=300.80, stdev= 5.37, samples=20 00:46:02.700 lat (msec) : 10=50.17%, 20=49.77%, 50=0.07% 00:46:02.700 cpu : usr=92.81%, sys=5.18%, ctx=719, majf=0, minf=1636 00:46:02.700 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:02.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:02.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:02.700 issued rwts: total=3010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:02.700 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:02.700 00:46:02.700 Run status group 0 (all jobs): 00:46:02.700 READ: bw=98.4MiB/s (103MB/s), 29.8MiB/s-37.5MiB/s (31.3MB/s-39.3MB/s), io=988MiB (1036MB), run=10045-10048msec 00:46:02.700 ----------------------------------------------------- 00:46:02.700 Suppressions used: 00:46:02.700 count bytes template 00:46:02.700 5 44 /usr/src/fio/parse.c 00:46:02.700 1 8 libtcmalloc_minimal.so 00:46:02.700 1 904 libcrypto.so 00:46:02.700 ----------------------------------------------------- 00:46:02.700 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:02.700 00:46:02.700 real 0m12.248s 00:46:02.700 user 0m42.737s 00:46:02.700 sys 0m2.180s 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:02.700 05:39:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:02.700 ************************************ 00:46:02.700 END TEST fio_dif_digest 00:46:02.700 ************************************ 00:46:02.700 05:39:16 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:46:02.700 05:39:16 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:02.700 rmmod nvme_tcp 00:46:02.700 rmmod nvme_fabrics 00:46:02.700 rmmod nvme_keyring 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1945602 ']' 00:46:02.700 05:39:16 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1945602 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1945602 ']' 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1945602 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1945602 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1945602' 00:46:02.700 killing process with pid 1945602 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1945602 00:46:02.700 05:39:16 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1945602 00:46:02.960 05:39:16 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:02.960 05:39:16 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:06.264 Waiting for block devices as requested 00:46:06.526 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:06.526 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:06.526 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:06.786 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 
00:46:06.786 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:06.786 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:06.786 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:07.047 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:07.047 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:07.308 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:07.308 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:07.308 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:07.569 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:07.569 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:07.569 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:07.830 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:07.830 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:08.091 05:39:22 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:08.091 05:39:22 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:08.091 05:39:22 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:46:08.091 05:39:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:46:08.091 05:39:22 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:08.091 05:39:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:46:08.091 05:39:22 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:08.091 05:39:22 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:08.091 05:39:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:08.091 05:39:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:08.091 05:39:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:10.642 05:39:24 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:10.642 00:46:10.642 real 1m25.458s 00:46:10.642 user 8m10.882s 00:46:10.642 sys 0m24.437s 00:46:10.642 05:39:24 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:10.642 05:39:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:10.642 ************************************ 00:46:10.642 END TEST nvmf_dif 00:46:10.642 ************************************ 00:46:10.642 05:39:24 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:10.642 05:39:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:10.642 05:39:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:10.642 05:39:24 -- common/autotest_common.sh@10 -- # set +x 00:46:10.642 ************************************ 00:46:10.642 START TEST nvmf_abort_qd_sizes 00:46:10.642 ************************************ 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:10.642 * Looking for test storage... 
00:46:10.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:10.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:10.642 --rc genhtml_branch_coverage=1 00:46:10.642 --rc genhtml_function_coverage=1 00:46:10.642 --rc genhtml_legend=1 00:46:10.642 --rc geninfo_all_blocks=1 00:46:10.642 --rc geninfo_unexecuted_blocks=1 00:46:10.642 00:46:10.642 ' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:46:10.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:10.642 --rc genhtml_branch_coverage=1 00:46:10.642 --rc genhtml_function_coverage=1 00:46:10.642 --rc genhtml_legend=1 00:46:10.642 --rc geninfo_all_blocks=1 00:46:10.642 --rc geninfo_unexecuted_blocks=1 00:46:10.642 00:46:10.642 ' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:10.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:10.642 --rc genhtml_branch_coverage=1 00:46:10.642 --rc genhtml_function_coverage=1 00:46:10.642 --rc genhtml_legend=1 00:46:10.642 --rc geninfo_all_blocks=1 00:46:10.642 --rc geninfo_unexecuted_blocks=1 00:46:10.642 00:46:10.642 ' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:10.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:10.642 --rc genhtml_branch_coverage=1 00:46:10.642 --rc genhtml_function_coverage=1 00:46:10.642 --rc genhtml_legend=1 00:46:10.642 --rc geninfo_all_blocks=1 00:46:10.642 --rc geninfo_unexecuted_blocks=1 00:46:10.642 00:46:10.642 ' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:10.642 05:39:24 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:10.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:10.642 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:46:10.643 05:39:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:46:18.785 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:46:18.786 Found 0000:31:00.0 (0x8086 - 0x159b) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:46:18.786 Found 0000:31:00.1 (0x8086 - 0x159b) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:46:18.786 Found net devices under 0000:31:00.0: cvl_0_0 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:46:18.786 Found net devices under 0000:31:00.1: cvl_0_1 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:18.786 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:18.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:18.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:46:18.786 00:46:18.786 --- 10.0.0.2 ping statistics --- 00:46:18.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:18.786 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:46:18.787 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:18.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:18.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:46:18.787 00:46:18.787 --- 10.0.0.1 ping statistics --- 00:46:18.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:18.787 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:46:18.787 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:18.787 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:46:18.787 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:46:18.787 05:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:21.330 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:21.330 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:21.330 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:21.330 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:21.330 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:21.330 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:21.330 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:21.330 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:21.589 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:21.589 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:21.589 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:21.589 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:21.589 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:21.589 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:21.589 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:21.589 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:21.589 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:21.849 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:21.849 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:21.849 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:21.849 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:21.849 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:21.849 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1967003 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1967003 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1967003 ']' 00:46:22.109 05:39:35 
nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:22.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:22.109 05:39:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:22.109 [2024-12-09 05:39:35.955866] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:46:22.109 [2024-12-09 05:39:35.955972] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:22.369 [2024-12-09 05:39:36.105809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:22.369 [2024-12-09 05:39:36.206433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:22.369 [2024-12-09 05:39:36.206483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:22.369 [2024-12-09 05:39:36.206495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:22.369 [2024-12-09 05:39:36.206506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:22.369 [2024-12-09 05:39:36.206515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
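(nvmf_tgt was started above with -m 0xf, i.e. binary 1111: one reactor per set bit, so cores 0 through 3 — which is what the four "Reactor started on core" notices just below report. The -e 0xFFFF flag likewise enables every tracepoint group, hence the spdk_trace hint in the notices. A quick way to decode such a core mask in the shell:

    mask=0xf
    for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 0, 1, 2 and 3 for mask 0xf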
00:46:22.369 [2024-12-09 05:39:36.208804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:22.369 [2024-12-09 05:39:36.208950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:22.369 [2024-12-09 05:39:36.209211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:22.369 [2024-12-09 05:39:36.209230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:22.939 05:39:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:22.939 ************************************ 00:46:22.939 START TEST spdk_target_abort 00:46:22.939 ************************************ 00:46:22.939 05:39:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:46:22.939 05:39:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:22.939 05:39:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b 
spdk_target 00:46:22.939 05:39:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:22.939 05:39:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:23.199 spdk_targetn1 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:23.459 [2024-12-09 05:39:37.199429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:23.459 [2024-12-09 05:39:37.245634] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 
-- # local target r 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:23.459 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:23.460 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:23.460 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:23.460 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:23.460 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:23.460 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:23.460 05:39:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:23.720 [2024-12-09 05:39:37.470216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:720 len:8 PRP1 0x200004ac5000 PRP2 0x0 00:46:23.720 [2024-12-09 05:39:37.470259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:005b p:1 m:0 dnr:0 00:46:23.720 [2024-12-09 05:39:37.483531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1008 len:8 PRP1 0x200004ac1000 PRP2 0x0 00:46:23.720 [2024-12-09 05:39:37.483566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0081 p:1 m:0 dnr:0 00:46:23.720 [2024-12-09 05:39:37.569472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3464 len:8 PRP1 0x200004abd000 PRP2 0x0 00:46:23.720 [2024-12-09 05:39:37.569503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b5 p:0 m:0 dnr:0 00:46:27.014 Initializing NVMe Controllers 00:46:27.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:27.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:27.014 Initialization complete. Launching workers. 
00:46:27.014 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10576, failed: 3 00:46:27.014 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2278, failed to submit 8301 00:46:27.014 success 734, unsuccessful 1544, failed 0 00:46:27.014 05:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:27.014 05:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:27.014 [2024-12-09 05:39:40.776904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:1080 len:8 PRP1 0x200004e47000 PRP2 0x0 00:46:27.014 [2024-12-09 05:39:40.776957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:008c p:1 m:0 dnr:0 00:46:27.014 [2024-12-09 05:39:40.857112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2912 len:8 PRP1 0x200004e5d000 PRP2 0x0 00:46:27.014 [2024-12-09 05:39:40.857153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:46:27.014 [2024-12-09 05:39:40.873150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:3360 len:8 PRP1 0x200004e5d000 PRP2 0x0 00:46:27.014 [2024-12-09 05:39:40.873182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00a9 p:0 m:0 dnr:0 00:46:27.014 [2024-12-09 05:39:40.897125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:3904 len:8 PRP1 0x200004e41000 PRP2 0x0 00:46:27.014 [2024-12-09 05:39:40.897156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00e9 p:0 m:0 dnr:0 00:46:30.317 Initializing NVMe Controllers 00:46:30.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:30.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:30.317 Initialization complete. Launching workers. 00:46:30.317 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8720, failed: 4 00:46:30.317 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7495 00:46:30.317 success 338, unsuccessful 891, failed 0 00:46:30.317 05:39:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:30.317 05:39:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:33.620 Initializing NVMe Controllers 00:46:33.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:33.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:33.620 Initialization complete. Launching workers. 
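(The abort counters in these runs appear internally consistent: aborts submitted plus failed-to-submit equals total I/O (completed + failed), and success + unsuccessful + failed sums back to aborts submitted. A quick shell check using the first run's numbers above (10576/3, 2278/8301, 734/1544/0); the third run's counters, printed just below, satisfy the same identities:

    io_completed=10576 ; io_failed=3
    submitted=2278     ; not_submitted=8301
    success=734        ; unsuccessful=1544 ; failed=0
    (( submitted + not_submitted == io_completed + io_failed )) && echo "abort count matches I/O count"
    (( success + unsuccessful + failed == submitted ))          && echo "abort results match submissions"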
00:46:33.620 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39626, failed: 0 00:46:33.620 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2827, failed to submit 36799 00:46:33.620 success 637, unsuccessful 2190, failed 0 00:46:33.620 05:39:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:33.620 05:39:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.620 05:39:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:33.620 05:39:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.620 05:39:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:33.620 05:39:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.620 05:39:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1967003 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1967003 ']' 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1967003 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967003 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967003' 00:46:35.538 killing process with pid 1967003 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1967003 00:46:35.538 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1967003 00:46:35.799 00:46:35.799 real 0m12.713s 00:46:35.799 user 0m50.943s 00:46:35.799 sys 0m2.180s 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:35.799 ************************************ 00:46:35.799 END TEST spdk_target_abort 00:46:35.799 ************************************ 00:46:35.799 05:39:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:35.799 05:39:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:35.799 05:39:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:35.799 05:39:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:35.799 ************************************ 00:46:35.799 START TEST kernel_target_abort 00:46:35.799 
************************************ 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:35.799 05:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:39.100 Waiting for block devices as requested 00:46:39.100 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:39.100 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:39.360 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:39.360 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:39.360 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:39.619 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:39.619 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:39.619 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:39.880 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:39.880 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:40.141 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:40.141 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:40.141 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:40.402 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:40.402 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:40.402 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:40.402 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:41.343 No valid GPT data, bailing 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:41.343 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:41.603 05:39:55 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:46:41.603 00:46:41.603 Discovery Log Number of Records 2, Generation counter 2 00:46:41.603 =====Discovery Log Entry 0====== 00:46:41.603 trtype: tcp 00:46:41.603 adrfam: ipv4 00:46:41.603 subtype: current discovery subsystem 00:46:41.603 treq: not specified, sq flow control disable supported 00:46:41.603 portid: 1 00:46:41.603 trsvcid: 4420 00:46:41.603 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:41.603 traddr: 10.0.0.1 00:46:41.603 eflags: none 00:46:41.603 sectype: none 00:46:41.603 =====Discovery Log Entry 1====== 00:46:41.603 trtype: tcp 00:46:41.603 adrfam: ipv4 00:46:41.603 subtype: nvme subsystem 00:46:41.603 treq: not specified, sq flow control disable supported 00:46:41.603 portid: 1 00:46:41.603 trsvcid: 4420 00:46:41.603 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:41.603 traddr: 10.0.0.1 00:46:41.603 eflags: none 00:46:41.603 sectype: none 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:41.603 05:39:55 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:41.603 05:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:44.905 Initializing NVMe Controllers 00:46:44.905 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:44.905 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:44.905 Initialization complete. Launching workers. 00:46:44.905 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60756, failed: 0 00:46:44.905 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 60756, failed to submit 0 00:46:44.905 success 0, unsuccessful 60756, failed 0 00:46:44.905 05:39:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:44.905 05:39:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:48.205 Initializing NVMe Controllers 00:46:48.205 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:48.205 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:48.205 Initialization complete. Launching workers. 
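Unlike spdk_target_abort, the kernel_target_abort runs traced here point the same abort sweep at a target built from the in-kernel nvmet driver; the configure_kernel_target trace above is just a sequence of mkdir/echo/ln -s operations on configfs. A sketch of that sequence, and of the clean_kernel_target teardown that appears later in the log, assuming the stock nvmet configfs attribute names (the NQN, namespace device and listen address are the ones shown in the trace):

# Hedged reconstruction of the kernel target setup traced above; the
# attribute file names (attr_allow_any_host, device_path, enable, addr_*)
# assume the standard nvmet configfs layout.
modprobe nvmet nvmet_tcp
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1 > "$subsys/attr_allow_any_host"           # accept any host NQN
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"     # values from the trace
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"     # expose subsystem on the port

# Teardown, mirroring clean_kernel_target further down in the log:
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet

The nvme discover output above confirms the port is live: entry 0 is the discovery subsystem and entry 1 is the nqn.2016-06.io.spdk:testnqn subsystem just created.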
00:46:48.205 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104723, failed: 0 00:46:48.205 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26378, failed to submit 78345 00:46:48.205 success 0, unsuccessful 26378, failed 0 00:46:48.205 05:40:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:48.205 05:40:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:51.499 Initializing NVMe Controllers 00:46:51.499 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:51.499 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:51.499 Initialization complete. Launching workers. 00:46:51.499 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 131820, failed: 0 00:46:51.499 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32998, failed to submit 98822 00:46:51.499 success 0, unsuccessful 32998, failed 0 00:46:51.499 05:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:51.499 05:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:51.499 05:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:46:51.499 05:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:51.499 05:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:51.499 05:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:51.499 05:40:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:51.499 05:40:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:46:51.499 05:40:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:46:51.499 05:40:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:54.793 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:54.793 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:54.794 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:46:54.794 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:56.706 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:56.966 00:46:56.966 real 0m21.082s 00:46:56.966 user 0m10.201s 00:46:56.966 sys 0m6.607s 00:46:56.966 05:40:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:56.966 05:40:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:56.966 ************************************ 00:46:56.966 END TEST kernel_target_abort 00:46:56.966 ************************************ 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:56.966 rmmod nvme_tcp 00:46:56.966 rmmod nvme_fabrics 00:46:56.966 rmmod nvme_keyring 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1967003 ']' 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1967003 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1967003 ']' 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1967003 00:46:56.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1967003) - No such process 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1967003 is not found' 00:46:56.966 Process with pid 1967003 is not found 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:56.966 05:40:10 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:00.370 Waiting for block devices as requested 00:47:00.370 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:00.370 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:00.370 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:00.630 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:00.630 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:00.630 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:00.891 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:00.891 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:00.891 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:47:01.152 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:01.152 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:01.412 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:01.412 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:01.412 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:01.687 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:01.687 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:01.687 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:01.948 05:40:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:04.494 05:40:17 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:04.494 00:47:04.494 real 0m53.791s 00:47:04.494 user 1m6.615s 00:47:04.494 sys 0m19.929s 00:47:04.494 05:40:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:04.494 05:40:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:04.494 ************************************ 00:47:04.494 END TEST nvmf_abort_qd_sizes 00:47:04.494 ************************************ 00:47:04.494 05:40:18 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:47:04.494 05:40:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:47:04.494 05:40:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:04.494 05:40:18 -- common/autotest_common.sh@10 -- # set +x 00:47:04.494 ************************************ 00:47:04.494 START TEST keyring_file 00:47:04.494 ************************************ 00:47:04.494 05:40:18 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:47:04.494 * Looking for test storage... 
00:47:04.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:04.494 05:40:18 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:04.494 05:40:18 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:47:04.494 05:40:18 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:04.494 05:40:18 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@345 -- # : 1 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@353 -- # local d=1 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:04.494 05:40:18 keyring_file -- scripts/common.sh@355 -- # echo 1 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@353 -- # local d=2 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@355 -- # echo 2 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@368 -- # return 0 00:47:04.495 05:40:18 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:04.495 05:40:18 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:04.495 --rc genhtml_branch_coverage=1 00:47:04.495 --rc genhtml_function_coverage=1 00:47:04.495 --rc genhtml_legend=1 00:47:04.495 --rc geninfo_all_blocks=1 00:47:04.495 --rc geninfo_unexecuted_blocks=1 00:47:04.495 00:47:04.495 ' 00:47:04.495 05:40:18 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:04.495 --rc genhtml_branch_coverage=1 00:47:04.495 --rc genhtml_function_coverage=1 00:47:04.495 --rc genhtml_legend=1 00:47:04.495 --rc geninfo_all_blocks=1 
00:47:04.495 --rc geninfo_unexecuted_blocks=1 00:47:04.495 00:47:04.495 ' 00:47:04.495 05:40:18 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:04.495 --rc genhtml_branch_coverage=1 00:47:04.495 --rc genhtml_function_coverage=1 00:47:04.495 --rc genhtml_legend=1 00:47:04.495 --rc geninfo_all_blocks=1 00:47:04.495 --rc geninfo_unexecuted_blocks=1 00:47:04.495 00:47:04.495 ' 00:47:04.495 05:40:18 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:04.495 --rc genhtml_branch_coverage=1 00:47:04.495 --rc genhtml_function_coverage=1 00:47:04.495 --rc genhtml_legend=1 00:47:04.495 --rc geninfo_all_blocks=1 00:47:04.495 --rc geninfo_unexecuted_blocks=1 00:47:04.495 00:47:04.495 ' 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:04.495 05:40:18 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:04.495 05:40:18 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:04.495 05:40:18 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:04.495 05:40:18 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:04.495 05:40:18 keyring_file -- paths/export.sh@5 -- # export PATH 00:47:04.495 05:40:18 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@51 -- # : 0 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:04.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
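key0 and key1 above are raw 16-byte hex strings; the prep_key trace that continues below wraps each one in the NVMe TLS PSK interchange format via format_interchange_psk and writes it to a 0600 temp file. A sketch of what that encoding appears to compute, assuming the standard interchange layout (base64 of the configured PSK plus its CRC32, prefixed with NVMeTLSkey-1 and a two-digit digest field, where 00 means the PSK is used as-is):

# Hedged sketch of the PSK interchange encoding for key0 from the log; the
# layout (PSK || CRC32 little-endian, base64) is an assumption based on the
# standard NVMe/TCP TLS PSK interchange format, not lifted from the script.
python3 - <<'PY'
import base64, zlib
key = bytes.fromhex("00112233445566778899aabbccddeeff")  # key0 above
crc = zlib.crc32(key).to_bytes(4, "little")              # integrity check
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
PY

The resulting one-line secret is what lands in /tmp/tmp.49IIsG40C5, which the keyring tests below register and then deliberately mis-handle.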
00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.49IIsG40C5 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.49IIsG40C5 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.49IIsG40C5 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.49IIsG40C5 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@17 -- # name=key1 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3uFfr1eHcr 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:04.495 05:40:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3uFfr1eHcr 00:47:04.495 05:40:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3uFfr1eHcr 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.3uFfr1eHcr 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@30 -- # tgtpid=1977757 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1977757 00:47:04.495 05:40:18 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:04.495 05:40:18 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1977757 ']' 00:47:04.495 05:40:18 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:04.495 05:40:18 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:04.496 05:40:18 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:04.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:04.496 05:40:18 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:04.496 05:40:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:04.757 [2024-12-09 05:40:18.534207] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:47:04.757 [2024-12-09 05:40:18.534348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977757 ] 00:47:04.757 [2024-12-09 05:40:18.690814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:05.019 [2024-12-09 05:40:18.812189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:05.593 05:40:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:05.593 [2024-12-09 05:40:19.525537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:05.593 null0 00:47:05.593 [2024-12-09 05:40:19.557561] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:05.593 [2024-12-09 05:40:19.558183] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:05.593 05:40:19 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:05.593 05:40:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:05.854 [2024-12-09 05:40:19.589609] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:47:05.854 request: 00:47:05.854 { 00:47:05.854 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:47:05.854 "secure_channel": false, 00:47:05.854 "listen_address": { 00:47:05.854 "trtype": "tcp", 00:47:05.854 "traddr": "127.0.0.1", 00:47:05.854 "trsvcid": "4420" 00:47:05.854 }, 00:47:05.854 "method": "nvmf_subsystem_add_listener", 00:47:05.854 "req_id": 1 00:47:05.854 } 00:47:05.854 Got JSON-RPC error response 00:47:05.854 response: 00:47:05.854 { 00:47:05.854 
"code": -32602, 00:47:05.854 "message": "Invalid parameters" 00:47:05.854 } 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:05.854 05:40:19 keyring_file -- keyring/file.sh@47 -- # bperfpid=1977890 00:47:05.854 05:40:19 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1977890 /var/tmp/bperf.sock 00:47:05.854 05:40:19 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1977890 ']' 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:05.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:05.854 05:40:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:05.854 [2024-12-09 05:40:19.685709] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:47:05.854 [2024-12-09 05:40:19.685837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977890 ] 00:47:05.854 [2024-12-09 05:40:19.839840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:06.115 [2024-12-09 05:40:19.963435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:06.687 05:40:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:06.687 05:40:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:06.687 05:40:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.49IIsG40C5 00:47:06.687 05:40:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.49IIsG40C5 00:47:06.687 05:40:20 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3uFfr1eHcr 00:47:06.687 05:40:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3uFfr1eHcr 00:47:06.949 05:40:20 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:47:06.949 05:40:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:47:06.949 05:40:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:06.949 05:40:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:06.949 05:40:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:47:07.212 05:40:20 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.49IIsG40C5 == \/\t\m\p\/\t\m\p\.\4\9\I\I\s\G\4\0\C\5 ]] 00:47:07.212 05:40:20 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:47:07.212 05:40:20 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:47:07.212 05:40:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:07.212 05:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.212 05:40:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:07.212 05:40:21 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.3uFfr1eHcr == \/\t\m\p\/\t\m\p\.\3\u\F\f\r\1\e\H\c\r ]] 00:47:07.212 05:40:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:47:07.212 05:40:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:07.212 05:40:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:07.212 05:40:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:07.212 05:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.212 05:40:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:07.473 05:40:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:47:07.473 05:40:21 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:47:07.473 05:40:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:07.473 05:40:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:07.473 05:40:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:07.473 05:40:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:07.473 05:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.733 05:40:21 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:47:07.733 05:40:21 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:07.733 05:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:07.733 [2024-12-09 05:40:21.696678] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:07.994 nvme0n1 00:47:07.994 05:40:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:47:07.994 05:40:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:07.994 05:40:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:07.994 05:40:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:07.994 05:40:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:07.994 05:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.994 05:40:21 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:47:07.995 05:40:21 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:47:07.995 05:40:21 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:47:07.995 05:40:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:07.995 05:40:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:07.995 05:40:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:07.995 05:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:08.255 05:40:22 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:47:08.255 05:40:22 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:08.255 Running I/O for 1 seconds... 00:47:09.640 14018.00 IOPS, 54.76 MiB/s 00:47:09.640 Latency(us) 00:47:09.640 [2024-12-09T04:40:23.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:09.640 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:47:09.640 nvme0n1 : 1.00 14085.74 55.02 0.00 0.00 9069.25 3017.39 17803.95 00:47:09.640 [2024-12-09T04:40:23.637Z] =================================================================================================================== 00:47:09.640 [2024-12-09T04:40:23.637Z] Total : 14085.74 55.02 0.00 0.00 9069.25 3017.39 17803.95 00:47:09.640 { 00:47:09.640 "results": [ 00:47:09.640 { 00:47:09.640 "job": "nvme0n1", 00:47:09.640 "core_mask": "0x2", 00:47:09.640 "workload": "randrw", 00:47:09.640 "percentage": 50, 00:47:09.640 "status": "finished", 00:47:09.640 "queue_depth": 128, 00:47:09.640 "io_size": 4096, 00:47:09.640 "runtime": 1.004349, 00:47:09.640 "iops": 14085.741111904328, 00:47:09.640 "mibps": 55.02242621837628, 00:47:09.640 "io_failed": 0, 00:47:09.640 "io_timeout": 0, 00:47:09.640 "avg_latency_us": 9069.246692585, 00:47:09.640 "min_latency_us": 3017.3866666666668, 00:47:09.640 "max_latency_us": 17803.946666666667 00:47:09.640 } 00:47:09.640 ], 00:47:09.640 "core_count": 1 00:47:09.640 } 00:47:09.640 05:40:23 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:09.640 05:40:23 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:09.640 05:40:23 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:47:09.640 05:40:23 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:09.640 05:40:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:09.901 05:40:23 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:47:09.902 05:40:23 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:09.902 05:40:23 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:09.902 05:40:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:09.902 05:40:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:09.902 05:40:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:09.902 05:40:23 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:09.902 05:40:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:09.902 05:40:23 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:09.902 05:40:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:10.163 [2024-12-09 05:40:23.962790] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:10.163 [2024-12-09 05:40:23.963412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000398a80 (107): Transport endpoint is not connected 00:47:10.163 [2024-12-09 05:40:23.964398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000398a80 (9): Bad file descriptor 00:47:10.163 [2024-12-09 05:40:23.965396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:47:10.163 [2024-12-09 05:40:23.965413] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:10.163 [2024-12-09 05:40:23.965423] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:10.163 [2024-12-09 05:40:23.965432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
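The failed attach above is intentional: the listener was brought up for key0, so a handshake with key1 cannot succeed, and the test wraps the RPC in the NOT helper so that a non-zero exit status counts as a pass. A simplified sketch of that inversion, matching the es bookkeeping visible in the trace (the real helper in autotest_common.sh also special-cases exit codes above 128 from signals, elided here):

# Hedged sketch of the NOT expect-failure wrapper exercised throughout
# these keyring tests.
NOT() {
    local es=0
    "$@" || es=$?      # run the wrapped command, capture its exit status
    (( es != 0 ))      # succeed only if the command failed
}
# usage, as in the trace: NOT bperf_cmd bdev_nvme_attach_controller ... --psk key1

The same pattern carries the rest of the section: adding a key file chmod'd to 0660 must fail, since the keyring insists on 0600 permissions, and attaching with a key whose backing file was removed must fail with "No such device".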
00:47:10.163 request: 00:47:10.163 { 00:47:10.163 "name": "nvme0", 00:47:10.163 "trtype": "tcp", 00:47:10.163 "traddr": "127.0.0.1", 00:47:10.163 "adrfam": "ipv4", 00:47:10.163 "trsvcid": "4420", 00:47:10.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:10.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:10.163 "prchk_reftag": false, 00:47:10.163 "prchk_guard": false, 00:47:10.163 "hdgst": false, 00:47:10.163 "ddgst": false, 00:47:10.163 "psk": "key1", 00:47:10.163 "allow_unrecognized_csi": false, 00:47:10.163 "method": "bdev_nvme_attach_controller", 00:47:10.163 "req_id": 1 00:47:10.163 } 00:47:10.163 Got JSON-RPC error response 00:47:10.163 response: 00:47:10.163 { 00:47:10.163 "code": -5, 00:47:10.163 "message": "Input/output error" 00:47:10.163 } 00:47:10.163 05:40:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:10.163 05:40:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:10.163 05:40:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:10.163 05:40:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:10.163 05:40:23 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:47:10.163 05:40:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:10.163 05:40:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:10.163 05:40:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:10.163 05:40:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:10.163 05:40:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:10.424 05:40:24 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:47:10.424 05:40:24 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:47:10.424 05:40:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:10.424 05:40:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:10.424 05:40:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:10.424 05:40:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:10.424 05:40:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:10.424 05:40:24 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:47:10.424 05:40:24 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:47:10.424 05:40:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:10.685 05:40:24 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:47:10.685 05:40:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:47:10.685 05:40:24 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:47:10.685 05:40:24 keyring_file -- keyring/file.sh@78 -- # jq length 00:47:10.685 05:40:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:10.946 05:40:24 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:47:10.946 05:40:24 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.49IIsG40C5 00:47:10.946 05:40:24 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.49IIsG40C5 00:47:10.946 05:40:24 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:10.946 05:40:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.49IIsG40C5 00:47:10.946 05:40:24 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:10.946 05:40:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:10.946 05:40:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:10.947 05:40:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:10.947 05:40:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.49IIsG40C5 00:47:10.947 05:40:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.49IIsG40C5 00:47:11.207 [2024-12-09 05:40:24.987697] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.49IIsG40C5': 0100660 00:47:11.207 [2024-12-09 05:40:24.987729] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:47:11.207 request: 00:47:11.208 { 00:47:11.208 "name": "key0", 00:47:11.208 "path": "/tmp/tmp.49IIsG40C5", 00:47:11.208 "method": "keyring_file_add_key", 00:47:11.208 "req_id": 1 00:47:11.208 } 00:47:11.208 Got JSON-RPC error response 00:47:11.208 response: 00:47:11.208 { 00:47:11.208 "code": -1, 00:47:11.208 "message": "Operation not permitted" 00:47:11.208 } 00:47:11.208 05:40:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:11.208 05:40:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:11.208 05:40:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:11.208 05:40:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:11.208 05:40:25 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.49IIsG40C5 00:47:11.208 05:40:25 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.49IIsG40C5 00:47:11.208 05:40:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.49IIsG40C5 00:47:11.469 05:40:25 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.49IIsG40C5 00:47:11.469 05:40:25 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:47:11.469 05:40:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:11.469 05:40:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:11.469 05:40:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:11.469 05:40:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:11.469 05:40:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:11.469 05:40:25 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:47:11.469 05:40:25 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:11.469 05:40:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:11.469 05:40:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:11.469 05:40:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:11.469 05:40:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:11.469 05:40:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:11.469 05:40:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:11.469 05:40:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:11.469 05:40:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:11.729 [2024-12-09 05:40:25.561199] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.49IIsG40C5': No such file or directory 00:47:11.729 [2024-12-09 05:40:25.561230] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:47:11.729 [2024-12-09 05:40:25.561247] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:47:11.729 [2024-12-09 05:40:25.561256] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:47:11.729 [2024-12-09 05:40:25.561268] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:11.729 [2024-12-09 05:40:25.561279] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:47:11.729 request: 00:47:11.729 { 00:47:11.729 "name": "nvme0", 00:47:11.729 "trtype": "tcp", 00:47:11.729 "traddr": "127.0.0.1", 00:47:11.729 "adrfam": "ipv4", 00:47:11.729 "trsvcid": "4420", 00:47:11.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:11.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:11.729 "prchk_reftag": false, 00:47:11.729 "prchk_guard": false, 00:47:11.729 "hdgst": false, 00:47:11.729 "ddgst": false, 00:47:11.729 "psk": "key0", 00:47:11.729 "allow_unrecognized_csi": false, 00:47:11.729 "method": "bdev_nvme_attach_controller", 00:47:11.729 "req_id": 1 00:47:11.729 } 00:47:11.729 Got JSON-RPC error response 00:47:11.729 response: 00:47:11.729 { 00:47:11.729 "code": -19, 00:47:11.729 "message": "No such device" 00:47:11.729 } 00:47:11.729 05:40:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:11.729 05:40:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:11.729 05:40:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:11.729 05:40:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:11.729 05:40:25 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:47:11.729 05:40:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:11.989 05:40:25 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:11.989 05:40:25 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:47:11.989 05:40:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:11.990 05:40:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:11.990 05:40:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:11.990 05:40:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:11.990 05:40:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Rfs3hnlodZ 00:47:11.990 05:40:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:11.990 05:40:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:11.990 05:40:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:47:11.990 05:40:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:11.990 05:40:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:11.990 05:40:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:11.990 05:40:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:11.990 05:40:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Rfs3hnlodZ 00:47:11.990 05:40:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Rfs3hnlodZ 00:47:11.990 05:40:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Rfs3hnlodZ 00:47:11.990 05:40:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Rfs3hnlodZ 00:47:11.990 05:40:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Rfs3hnlodZ 00:47:11.990 05:40:25 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:11.990 05:40:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:12.250 nvme0n1 00:47:12.250 05:40:26 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:47:12.250 05:40:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:12.250 05:40:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:12.250 05:40:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:12.250 05:40:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:12.250 05:40:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:12.510 05:40:26 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:47:12.510 05:40:26 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:47:12.510 05:40:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:12.769 05:40:26 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:47:12.769 05:40:26 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:47:12.769 05:40:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:12.769 05:40:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:12.769 05:40:26 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:12.769 05:40:26 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:47:12.769 05:40:26 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:47:12.769 05:40:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:12.769 05:40:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:12.769 05:40:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:12.769 05:40:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:12.769 05:40:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.028 05:40:26 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:47:13.028 05:40:26 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:13.028 05:40:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:13.287 05:40:27 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:47:13.287 05:40:27 keyring_file -- keyring/file.sh@105 -- # jq length 00:47:13.288 05:40:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.288 05:40:27 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:47:13.288 05:40:27 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Rfs3hnlodZ 00:47:13.288 05:40:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Rfs3hnlodZ 00:47:13.557 05:40:27 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3uFfr1eHcr 00:47:13.558 05:40:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3uFfr1eHcr 00:47:13.817 05:40:27 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:13.817 05:40:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:13.817 nvme0n1 00:47:14.078 05:40:27 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:47:14.078 05:40:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:47:14.078 05:40:28 keyring_file -- keyring/file.sh@113 -- # config='{ 00:47:14.078 "subsystems": [ 00:47:14.078 { 00:47:14.078 "subsystem": "keyring", 00:47:14.078 "config": [ 00:47:14.078 { 00:47:14.078 "method": "keyring_file_add_key", 00:47:14.078 "params": { 00:47:14.078 "name": "key0", 00:47:14.078 "path": "/tmp/tmp.Rfs3hnlodZ" 00:47:14.078 } 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "method": "keyring_file_add_key", 00:47:14.078 "params": { 00:47:14.078 "name": "key1", 00:47:14.078 "path": "/tmp/tmp.3uFfr1eHcr" 00:47:14.078 } 00:47:14.078 } 00:47:14.078 ] 00:47:14.078 
}, 00:47:14.078 { 00:47:14.078 "subsystem": "iobuf", 00:47:14.078 "config": [ 00:47:14.078 { 00:47:14.078 "method": "iobuf_set_options", 00:47:14.078 "params": { 00:47:14.078 "small_pool_count": 8192, 00:47:14.078 "large_pool_count": 1024, 00:47:14.078 "small_bufsize": 8192, 00:47:14.078 "large_bufsize": 135168, 00:47:14.078 "enable_numa": false 00:47:14.078 } 00:47:14.078 } 00:47:14.078 ] 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "subsystem": "sock", 00:47:14.078 "config": [ 00:47:14.078 { 00:47:14.078 "method": "sock_set_default_impl", 00:47:14.078 "params": { 00:47:14.078 "impl_name": "posix" 00:47:14.078 } 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "method": "sock_impl_set_options", 00:47:14.078 "params": { 00:47:14.078 "impl_name": "ssl", 00:47:14.078 "recv_buf_size": 4096, 00:47:14.078 "send_buf_size": 4096, 00:47:14.078 "enable_recv_pipe": true, 00:47:14.078 "enable_quickack": false, 00:47:14.078 "enable_placement_id": 0, 00:47:14.078 "enable_zerocopy_send_server": true, 00:47:14.078 "enable_zerocopy_send_client": false, 00:47:14.078 "zerocopy_threshold": 0, 00:47:14.078 "tls_version": 0, 00:47:14.078 "enable_ktls": false 00:47:14.078 } 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "method": "sock_impl_set_options", 00:47:14.078 "params": { 00:47:14.078 "impl_name": "posix", 00:47:14.078 "recv_buf_size": 2097152, 00:47:14.078 "send_buf_size": 2097152, 00:47:14.078 "enable_recv_pipe": true, 00:47:14.078 "enable_quickack": false, 00:47:14.078 "enable_placement_id": 0, 00:47:14.078 "enable_zerocopy_send_server": true, 00:47:14.078 "enable_zerocopy_send_client": false, 00:47:14.078 "zerocopy_threshold": 0, 00:47:14.078 "tls_version": 0, 00:47:14.078 "enable_ktls": false 00:47:14.078 } 00:47:14.078 } 00:47:14.078 ] 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "subsystem": "vmd", 00:47:14.078 "config": [] 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "subsystem": "accel", 00:47:14.078 "config": [ 00:47:14.078 { 00:47:14.078 "method": "accel_set_options", 00:47:14.078 "params": { 00:47:14.078 "small_cache_size": 128, 00:47:14.078 "large_cache_size": 16, 00:47:14.078 "task_count": 2048, 00:47:14.078 "sequence_count": 2048, 00:47:14.078 "buf_count": 2048 00:47:14.078 } 00:47:14.078 } 00:47:14.078 ] 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "subsystem": "bdev", 00:47:14.078 "config": [ 00:47:14.078 { 00:47:14.078 "method": "bdev_set_options", 00:47:14.078 "params": { 00:47:14.078 "bdev_io_pool_size": 65535, 00:47:14.078 "bdev_io_cache_size": 256, 00:47:14.078 "bdev_auto_examine": true, 00:47:14.078 "iobuf_small_cache_size": 128, 00:47:14.078 "iobuf_large_cache_size": 16 00:47:14.078 } 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "method": "bdev_raid_set_options", 00:47:14.078 "params": { 00:47:14.078 "process_window_size_kb": 1024, 00:47:14.078 "process_max_bandwidth_mb_sec": 0 00:47:14.078 } 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "method": "bdev_iscsi_set_options", 00:47:14.078 "params": { 00:47:14.078 "timeout_sec": 30 00:47:14.078 } 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "method": "bdev_nvme_set_options", 00:47:14.078 "params": { 00:47:14.078 "action_on_timeout": "none", 00:47:14.078 "timeout_us": 0, 00:47:14.078 "timeout_admin_us": 0, 00:47:14.078 "keep_alive_timeout_ms": 10000, 00:47:14.078 "arbitration_burst": 0, 00:47:14.078 "low_priority_weight": 0, 00:47:14.078 "medium_priority_weight": 0, 00:47:14.078 "high_priority_weight": 0, 00:47:14.078 "nvme_adminq_poll_period_us": 10000, 00:47:14.078 "nvme_ioq_poll_period_us": 0, 00:47:14.078 "io_queue_requests": 512, 00:47:14.078 
"delay_cmd_submit": true, 00:47:14.078 "transport_retry_count": 4, 00:47:14.078 "bdev_retry_count": 3, 00:47:14.078 "transport_ack_timeout": 0, 00:47:14.078 "ctrlr_loss_timeout_sec": 0, 00:47:14.078 "reconnect_delay_sec": 0, 00:47:14.078 "fast_io_fail_timeout_sec": 0, 00:47:14.078 "disable_auto_failback": false, 00:47:14.078 "generate_uuids": false, 00:47:14.078 "transport_tos": 0, 00:47:14.078 "nvme_error_stat": false, 00:47:14.078 "rdma_srq_size": 0, 00:47:14.078 "io_path_stat": false, 00:47:14.078 "allow_accel_sequence": false, 00:47:14.078 "rdma_max_cq_size": 0, 00:47:14.078 "rdma_cm_event_timeout_ms": 0, 00:47:14.078 "dhchap_digests": [ 00:47:14.078 "sha256", 00:47:14.078 "sha384", 00:47:14.078 "sha512" 00:47:14.078 ], 00:47:14.078 "dhchap_dhgroups": [ 00:47:14.078 "null", 00:47:14.078 "ffdhe2048", 00:47:14.078 "ffdhe3072", 00:47:14.078 "ffdhe4096", 00:47:14.078 "ffdhe6144", 00:47:14.078 "ffdhe8192" 00:47:14.078 ] 00:47:14.078 } 00:47:14.078 }, 00:47:14.078 { 00:47:14.078 "method": "bdev_nvme_attach_controller", 00:47:14.078 "params": { 00:47:14.078 "name": "nvme0", 00:47:14.078 "trtype": "TCP", 00:47:14.078 "adrfam": "IPv4", 00:47:14.078 "traddr": "127.0.0.1", 00:47:14.078 "trsvcid": "4420", 00:47:14.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:14.078 "prchk_reftag": false, 00:47:14.078 "prchk_guard": false, 00:47:14.078 "ctrlr_loss_timeout_sec": 0, 00:47:14.078 "reconnect_delay_sec": 0, 00:47:14.078 "fast_io_fail_timeout_sec": 0, 00:47:14.078 "psk": "key0", 00:47:14.079 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:14.079 "hdgst": false, 00:47:14.079 "ddgst": false, 00:47:14.079 "multipath": "multipath" 00:47:14.079 } 00:47:14.079 }, 00:47:14.079 { 00:47:14.079 "method": "bdev_nvme_set_hotplug", 00:47:14.079 "params": { 00:47:14.079 "period_us": 100000, 00:47:14.079 "enable": false 00:47:14.079 } 00:47:14.079 }, 00:47:14.079 { 00:47:14.079 "method": "bdev_wait_for_examine" 00:47:14.079 } 00:47:14.079 ] 00:47:14.079 }, 00:47:14.079 { 00:47:14.079 "subsystem": "nbd", 00:47:14.079 "config": [] 00:47:14.079 } 00:47:14.079 ] 00:47:14.079 }' 00:47:14.079 05:40:28 keyring_file -- keyring/file.sh@115 -- # killprocess 1977890 00:47:14.079 05:40:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1977890 ']' 00:47:14.079 05:40:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1977890 00:47:14.079 05:40:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:14.079 05:40:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:14.079 05:40:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977890 00:47:14.342 05:40:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:14.342 05:40:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:14.342 05:40:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977890' 00:47:14.342 killing process with pid 1977890 00:47:14.342 05:40:28 keyring_file -- common/autotest_common.sh@973 -- # kill 1977890 00:47:14.342 Received shutdown signal, test time was about 1.000000 seconds 00:47:14.342 00:47:14.342 Latency(us) 00:47:14.342 [2024-12-09T04:40:28.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:14.342 [2024-12-09T04:40:28.339Z] =================================================================================================================== 00:47:14.342 [2024-12-09T04:40:28.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:14.342 05:40:28 
keyring_file -- common/autotest_common.sh@978 -- # wait 1977890 00:47:14.602 05:40:28 keyring_file -- keyring/file.sh@118 -- # bperfpid=1979707 00:47:14.602 05:40:28 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1979707 /var/tmp/bperf.sock 00:47:14.602 05:40:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1979707 ']' 00:47:14.602 05:40:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:14.602 05:40:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:14.602 05:40:28 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:47:14.602 05:40:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:14.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:14.602 05:40:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:14.602 05:40:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:14.602 05:40:28 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:47:14.602 "subsystems": [ 00:47:14.602 { 00:47:14.602 "subsystem": "keyring", 00:47:14.602 "config": [ 00:47:14.602 { 00:47:14.602 "method": "keyring_file_add_key", 00:47:14.602 "params": { 00:47:14.602 "name": "key0", 00:47:14.602 "path": "/tmp/tmp.Rfs3hnlodZ" 00:47:14.602 } 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "method": "keyring_file_add_key", 00:47:14.602 "params": { 00:47:14.602 "name": "key1", 00:47:14.602 "path": "/tmp/tmp.3uFfr1eHcr" 00:47:14.602 } 00:47:14.602 } 00:47:14.602 ] 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "subsystem": "iobuf", 00:47:14.602 "config": [ 00:47:14.602 { 00:47:14.602 "method": "iobuf_set_options", 00:47:14.602 "params": { 00:47:14.602 "small_pool_count": 8192, 00:47:14.602 "large_pool_count": 1024, 00:47:14.602 "small_bufsize": 8192, 00:47:14.602 "large_bufsize": 135168, 00:47:14.602 "enable_numa": false 00:47:14.602 } 00:47:14.602 } 00:47:14.602 ] 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "subsystem": "sock", 00:47:14.602 "config": [ 00:47:14.602 { 00:47:14.602 "method": "sock_set_default_impl", 00:47:14.602 "params": { 00:47:14.602 "impl_name": "posix" 00:47:14.602 } 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "method": "sock_impl_set_options", 00:47:14.602 "params": { 00:47:14.602 "impl_name": "ssl", 00:47:14.602 "recv_buf_size": 4096, 00:47:14.602 "send_buf_size": 4096, 00:47:14.602 "enable_recv_pipe": true, 00:47:14.602 "enable_quickack": false, 00:47:14.602 "enable_placement_id": 0, 00:47:14.602 "enable_zerocopy_send_server": true, 00:47:14.602 "enable_zerocopy_send_client": false, 00:47:14.602 "zerocopy_threshold": 0, 00:47:14.602 "tls_version": 0, 00:47:14.602 "enable_ktls": false 00:47:14.602 } 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "method": "sock_impl_set_options", 00:47:14.602 "params": { 00:47:14.602 "impl_name": "posix", 00:47:14.602 "recv_buf_size": 2097152, 00:47:14.602 "send_buf_size": 2097152, 00:47:14.602 "enable_recv_pipe": true, 00:47:14.602 "enable_quickack": false, 00:47:14.602 "enable_placement_id": 0, 00:47:14.602 "enable_zerocopy_send_server": true, 00:47:14.602 "enable_zerocopy_send_client": false, 00:47:14.602 "zerocopy_threshold": 0, 00:47:14.602 "tls_version": 0, 00:47:14.602 "enable_ktls": false 00:47:14.602 } 00:47:14.602 } 00:47:14.602 ] 00:47:14.602 }, 
00:47:14.602 { 00:47:14.602 "subsystem": "vmd", 00:47:14.602 "config": [] 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "subsystem": "accel", 00:47:14.602 "config": [ 00:47:14.602 { 00:47:14.602 "method": "accel_set_options", 00:47:14.602 "params": { 00:47:14.602 "small_cache_size": 128, 00:47:14.602 "large_cache_size": 16, 00:47:14.602 "task_count": 2048, 00:47:14.602 "sequence_count": 2048, 00:47:14.602 "buf_count": 2048 00:47:14.602 } 00:47:14.602 } 00:47:14.602 ] 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "subsystem": "bdev", 00:47:14.602 "config": [ 00:47:14.602 { 00:47:14.602 "method": "bdev_set_options", 00:47:14.602 "params": { 00:47:14.602 "bdev_io_pool_size": 65535, 00:47:14.602 "bdev_io_cache_size": 256, 00:47:14.602 "bdev_auto_examine": true, 00:47:14.602 "iobuf_small_cache_size": 128, 00:47:14.602 "iobuf_large_cache_size": 16 00:47:14.602 } 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "method": "bdev_raid_set_options", 00:47:14.602 "params": { 00:47:14.602 "process_window_size_kb": 1024, 00:47:14.602 "process_max_bandwidth_mb_sec": 0 00:47:14.602 } 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "method": "bdev_iscsi_set_options", 00:47:14.602 "params": { 00:47:14.602 "timeout_sec": 30 00:47:14.602 } 00:47:14.602 }, 00:47:14.602 { 00:47:14.602 "method": "bdev_nvme_set_options", 00:47:14.602 "params": { 00:47:14.602 "action_on_timeout": "none", 00:47:14.602 "timeout_us": 0, 00:47:14.602 "timeout_admin_us": 0, 00:47:14.602 "keep_alive_timeout_ms": 10000, 00:47:14.602 "arbitration_burst": 0, 00:47:14.602 "low_priority_weight": 0, 00:47:14.602 "medium_priority_weight": 0, 00:47:14.602 "high_priority_weight": 0, 00:47:14.602 "nvme_adminq_poll_period_us": 10000, 00:47:14.602 "nvme_ioq_poll_period_us": 0, 00:47:14.602 "io_queue_requests": 512, 00:47:14.602 "delay_cmd_submit": true, 00:47:14.602 "transport_retry_count": 4, 00:47:14.602 "bdev_retry_count": 3, 00:47:14.602 "transport_ack_timeout": 0, 00:47:14.602 "ctrlr_loss_timeout_sec": 0, 00:47:14.602 "reconnect_delay_sec": 0, 00:47:14.602 "fast_io_fail_timeout_sec": 0, 00:47:14.602 "disable_auto_failback": false, 00:47:14.602 "generate_uuids": false, 00:47:14.602 "transport_tos": 0, 00:47:14.602 "nvme_error_stat": false, 00:47:14.602 "rdma_srq_size": 0, 00:47:14.602 "io_path_stat": false, 00:47:14.602 "allow_accel_sequence": false, 00:47:14.602 "rdma_max_cq_size": 0, 00:47:14.602 "rdma_cm_event_timeout_ms": 0, 00:47:14.602 "dhchap_digests": [ 00:47:14.602 "sha256", 00:47:14.602 "sha384", 00:47:14.602 "sha512" 00:47:14.602 ], 00:47:14.602 "dhchap_dhgroups": [ 00:47:14.602 "null", 00:47:14.602 "ffdhe2048", 00:47:14.602 "ffdhe3072", 00:47:14.602 "ffdhe4096", 00:47:14.602 "ffdhe6144", 00:47:14.602 "ffdhe8192" 00:47:14.603 ] 00:47:14.603 } 00:47:14.603 }, 00:47:14.603 { 00:47:14.603 "method": "bdev_nvme_attach_controller", 00:47:14.603 "params": { 00:47:14.603 "name": "nvme0", 00:47:14.603 "trtype": "TCP", 00:47:14.603 "adrfam": "IPv4", 00:47:14.603 "traddr": "127.0.0.1", 00:47:14.603 "trsvcid": "4420", 00:47:14.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:14.603 "prchk_reftag": false, 00:47:14.603 "prchk_guard": false, 00:47:14.603 "ctrlr_loss_timeout_sec": 0, 00:47:14.603 "reconnect_delay_sec": 0, 00:47:14.603 "fast_io_fail_timeout_sec": 0, 00:47:14.603 "psk": "key0", 00:47:14.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:14.603 "hdgst": false, 00:47:14.603 "ddgst": false, 00:47:14.603 "multipath": "multipath" 00:47:14.603 } 00:47:14.603 }, 00:47:14.603 { 00:47:14.603 "method": "bdev_nvme_set_hotplug", 00:47:14.603 "params": { 
00:47:14.603 "period_us": 100000, 00:47:14.603 "enable": false 00:47:14.603 } 00:47:14.603 }, 00:47:14.603 { 00:47:14.603 "method": "bdev_wait_for_examine" 00:47:14.603 } 00:47:14.603 ] 00:47:14.603 }, 00:47:14.603 { 00:47:14.603 "subsystem": "nbd", 00:47:14.603 "config": [] 00:47:14.603 } 00:47:14.603 ] 00:47:14.603 }' 00:47:14.863 [2024-12-09 05:40:28.643141] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:47:14.863 [2024-12-09 05:40:28.643247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979707 ] 00:47:14.863 [2024-12-09 05:40:28.774895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:14.863 [2024-12-09 05:40:28.849826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:15.433 [2024-12-09 05:40:29.120684] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:15.433 05:40:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:15.433 05:40:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:15.433 05:40:29 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:47:15.433 05:40:29 keyring_file -- keyring/file.sh@121 -- # jq length 00:47:15.433 05:40:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:15.693 05:40:29 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:47:15.693 05:40:29 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:47:15.693 05:40:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:15.693 05:40:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:15.693 05:40:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:15.693 05:40:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:15.693 05:40:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:15.954 05:40:29 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:47:15.954 05:40:29 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:47:15.954 05:40:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:15.954 05:40:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:15.954 05:40:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:15.954 05:40:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:15.954 05:40:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:15.954 05:40:29 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:47:15.954 05:40:29 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:47:16.214 05:40:29 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:47:16.214 05:40:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:47:16.214 05:40:30 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:47:16.214 05:40:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:47:16.214 05:40:30 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.Rfs3hnlodZ /tmp/tmp.3uFfr1eHcr 00:47:16.214 05:40:30 keyring_file -- keyring/file.sh@20 -- # killprocess 1979707 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1979707 ']' 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1979707 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979707 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979707' 00:47:16.214 killing process with pid 1979707 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@973 -- # kill 1979707 00:47:16.214 Received shutdown signal, test time was about 1.000000 seconds 00:47:16.214 00:47:16.214 Latency(us) 00:47:16.214 [2024-12-09T04:40:30.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:16.214 [2024-12-09T04:40:30.211Z] =================================================================================================================== 00:47:16.214 [2024-12-09T04:40:30.211Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:47:16.214 05:40:30 keyring_file -- common/autotest_common.sh@978 -- # wait 1979707 00:47:16.784 05:40:30 keyring_file -- keyring/file.sh@21 -- # killprocess 1977757 00:47:16.784 05:40:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1977757 ']' 00:47:16.784 05:40:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1977757 00:47:16.784 05:40:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:16.784 05:40:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:16.784 05:40:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977757 00:47:16.785 05:40:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:16.785 05:40:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:16.785 05:40:30 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977757' 00:47:16.785 killing process with pid 1977757 00:47:16.785 05:40:30 keyring_file -- common/autotest_common.sh@973 -- # kill 1977757 00:47:16.785 05:40:30 keyring_file -- common/autotest_common.sh@978 -- # wait 1977757 00:47:18.169 00:47:18.169 real 0m13.800s 00:47:18.169 user 0m30.826s 00:47:18.169 sys 0m3.014s 00:47:18.169 05:40:31 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:18.169 05:40:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:18.169 ************************************ 00:47:18.169 END TEST keyring_file 00:47:18.169 ************************************ 00:47:18.169 05:40:31 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:47:18.169 05:40:31 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:18.169 05:40:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:18.169 05:40:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:18.169 05:40:31 
-- common/autotest_common.sh@10 -- # set +x 00:47:18.169 ************************************ 00:47:18.169 START TEST keyring_linux 00:47:18.169 ************************************ 00:47:18.169 05:40:31 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:18.169 Joined session keyring: 859382444 00:47:18.169 * Looking for test storage... 00:47:18.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:18.169 05:40:32 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:18.169 05:40:32 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:47:18.169 05:40:32 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:18.169 05:40:32 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@345 -- # : 1 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:18.169 05:40:32 keyring_linux -- scripts/common.sh@368 -- # return 0 00:47:18.169 05:40:32 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:18.169 05:40:32 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:18.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:18.169 --rc genhtml_branch_coverage=1 00:47:18.169 --rc genhtml_function_coverage=1 00:47:18.169 --rc genhtml_legend=1 00:47:18.169 --rc geninfo_all_blocks=1 00:47:18.169 --rc geninfo_unexecuted_blocks=1 00:47:18.169 00:47:18.169 ' 00:47:18.169 05:40:32 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:18.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:18.169 --rc genhtml_branch_coverage=1 00:47:18.169 --rc genhtml_function_coverage=1 00:47:18.169 --rc genhtml_legend=1 00:47:18.169 --rc geninfo_all_blocks=1 00:47:18.169 --rc geninfo_unexecuted_blocks=1 00:47:18.169 00:47:18.169 ' 00:47:18.169 05:40:32 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:18.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:18.169 --rc genhtml_branch_coverage=1 00:47:18.169 --rc genhtml_function_coverage=1 00:47:18.169 --rc genhtml_legend=1 00:47:18.169 --rc geninfo_all_blocks=1 00:47:18.169 --rc geninfo_unexecuted_blocks=1 00:47:18.169 00:47:18.169 ' 00:47:18.169 05:40:32 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:18.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:18.169 --rc genhtml_branch_coverage=1 00:47:18.169 --rc genhtml_function_coverage=1 00:47:18.169 --rc genhtml_legend=1 00:47:18.169 --rc geninfo_all_blocks=1 00:47:18.169 --rc geninfo_unexecuted_blocks=1 00:47:18.169 00:47:18.169 ' 00:47:18.169 05:40:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:18.169 05:40:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:18.430 05:40:32 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:18.430 05:40:32 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:47:18.430 05:40:32 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:18.430 05:40:32 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:18.430 05:40:32 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:18.431 05:40:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:18.431 05:40:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:18.431 05:40:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:18.431 05:40:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:47:18.431 05:40:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
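
The scripts/common.sh trace above (the "lt 1.15 2" walk for the lcov version check) steps through a field-by-field version comparison. A minimal bash sketch of that logic, under the assumption that missing fields compare as zero; ver_lt is an illustrative name, not the script's own:

ver_lt() {
    # split both versions on the ".-:" separators and compare field by field
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0   # strictly smaller
        ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1   # strictly larger
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
# true here, which is why the trace then selects the pre-2.0
# "--rc lcov_branch_coverage=1 ..." option spelling for lcov
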
00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:18.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:47:18.431 /tmp/:spdk-test:key0 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:47:18.431 
05:40:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:18.431 05:40:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:47:18.431 05:40:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:47:18.431 /tmp/:spdk-test:key1 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1980477 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1980477 00:47:18.431 05:40:32 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:18.431 05:40:32 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1980477 ']' 00:47:18.431 05:40:32 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:18.431 05:40:32 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:18.431 05:40:32 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:18.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:18.431 05:40:32 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:18.431 05:40:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:18.431 [2024-12-09 05:40:32.368718] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
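
The format_interchange_psk/format_key pipeline traced above (here, and earlier in the keyring_file run) wraps a configured PSK in the NVMe TLS interchange format NVMeTLSkey-1:<hmac>:<base64 payload>:. A hedged sketch of what the inline "python -" step appears to compute; the function name is illustrative and the little-endian byte order of the appended CRC32 is an assumption:

format_interchange_psk_sketch() {
    local key=$1 hmac=${2:-0}
    python3 - "$key" "$hmac" <<'PY'
import base64, sys, zlib

key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order
b64 = base64.b64encode(key + crc).decode()    # key bytes + CRC, base64-encoded
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), b64))
PY
}

format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0
# if the CRC assumption holds, this reproduces the
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value logged below
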
00:47:18.431 [2024-12-09 05:40:32.368843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980477 ] 00:47:18.691 [2024-12-09 05:40:32.517588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:18.691 [2024-12-09 05:40:32.590526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:19.262 05:40:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:19.262 [2024-12-09 05:40:33.125127] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:19.262 null0 00:47:19.262 [2024-12-09 05:40:33.157162] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:19.262 [2024-12-09 05:40:33.157563] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.262 05:40:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:19.262 71650529 00:47:19.262 05:40:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:19.262 579149034 00:47:19.262 05:40:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1980633 00:47:19.262 05:40:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1980633 /var/tmp/bperf.sock 00:47:19.262 05:40:33 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1980633 ']' 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:19.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:19.262 05:40:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:19.522 [2024-12-09 05:40:33.263549] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
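
The two keyctl add calls above stage the interchange-format PSKs as "user"-type keys on the session keyring (@s); once keyring_linux_set_options --enable is issued, bdevperf can resolve them by name. A minimal sketch of that flow, reusing the key0 payload printed above (psk and sn are illustrative variable names):

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # prints the serial (71650529 above)
keyctl search @s user :spdk-test:key0             # name -> serial lookup, as in get_keysn
keyctl print "$sn"                                # payload readback, compared against the PSK
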
00:47:19.522 [2024-12-09 05:40:33.263655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980633 ] 00:47:19.522 [2024-12-09 05:40:33.393937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:19.522 [2024-12-09 05:40:33.468500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:20.093 05:40:34 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:20.093 05:40:34 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:20.093 05:40:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:20.093 05:40:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:20.353 05:40:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:20.353 05:40:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:20.613 05:40:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:20.613 05:40:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:20.874 [2024-12-09 05:40:34.699566] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:20.874 nvme0n1 00:47:20.874 05:40:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:47:20.874 05:40:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:20.874 05:40:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:20.874 05:40:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:20.874 05:40:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:20.874 05:40:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:21.134 05:40:34 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:21.134 05:40:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:21.134 05:40:34 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:21.134 05:40:34 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:21.134 05:40:34 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:21.134 05:40:34 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:21.134 05:40:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:21.393 05:40:35 keyring_linux -- keyring/linux.sh@25 -- # sn=71650529 00:47:21.393 05:40:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:21.393 05:40:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:21.393 05:40:35 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 71650529 == \7\1\6\5\0\5\2\9 ]] 00:47:21.393 05:40:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 71650529 00:47:21.393 05:40:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:21.393 05:40:35 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:21.393 Running I/O for 1 seconds... 00:47:22.332 19880.00 IOPS, 77.66 MiB/s 00:47:22.332 Latency(us) 00:47:22.332 [2024-12-09T04:40:36.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:22.332 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:22.332 nvme0n1 : 1.01 19881.58 77.66 0.00 0.00 6415.17 5133.65 15291.73 00:47:22.332 [2024-12-09T04:40:36.329Z] =================================================================================================================== 00:47:22.332 [2024-12-09T04:40:36.329Z] Total : 19881.58 77.66 0.00 0.00 6415.17 5133.65 15291.73 00:47:22.332 { 00:47:22.332 "results": [ 00:47:22.332 { 00:47:22.332 "job": "nvme0n1", 00:47:22.332 "core_mask": "0x2", 00:47:22.332 "workload": "randread", 00:47:22.332 "status": "finished", 00:47:22.332 "queue_depth": 128, 00:47:22.332 "io_size": 4096, 00:47:22.332 "runtime": 1.006409, 00:47:22.332 "iops": 19881.578960442523, 00:47:22.332 "mibps": 77.6624178142286, 00:47:22.332 "io_failed": 0, 00:47:22.332 "io_timeout": 0, 00:47:22.332 "avg_latency_us": 6415.16697752678, 00:47:22.332 "min_latency_us": 5133.653333333334, 00:47:22.332 "max_latency_us": 15291.733333333334 00:47:22.332 } 00:47:22.332 ], 00:47:22.332 "core_count": 1 00:47:22.332 } 00:47:22.332 05:40:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:22.332 05:40:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:22.593 05:40:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:22.593 05:40:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:22.593 05:40:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:22.593 05:40:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:22.593 05:40:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:22.593 05:40:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:22.854 05:40:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:22.854 05:40:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:22.854 05:40:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:22.854 05:40:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:22.854 05:40:36 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:47:22.854 05:40:36 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
00:47:22.854 05:40:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:47:22.854 05:40:36 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:47:22.854 05:40:36 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:47:22.854 05:40:36 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:47:22.854 05:40:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:47:22.854 05:40:36 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:47:22.854 05:40:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:47:22.854 05:40:36 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:47:22.854 05:40:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:47:22.854 [2024-12-09 05:40:36.838191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:47:22.854 [2024-12-09 05:40:36.839093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000398a80 (107): Transport endpoint is not connected
00:47:22.854 [2024-12-09 05:40:36.840075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000398a80 (9): Bad file descriptor
00:47:22.854 [2024-12-09 05:40:36.841074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:47:22.854 [2024-12-09 05:40:36.841098] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:47:22.854 [2024-12-09 05:40:36.841108] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:47:22.854 [2024-12-09 05:40:36.841117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
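
This second attach, using :spdk-test:key1, is the negative half of the test, and the errors above are the expected outcome: linux.sh runs the command under the NOT wrapper, whose exit-status bookkeeping (es=1 and the es > 128 signal check) shows up in the trace just below. A simplified sketch of that idiom, leaving out the expected-signal handling the real helper in common/autotest_common.sh also performs:

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"  # died on a signal: treat as a genuine failure
      (( es != 0 ))                   # succeed if and only if the command failed
  }
  # usage: NOT some_command_expected_to_fail arg...
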
00:47:22.854 request:
00:47:22.854 {
00:47:22.854 "name": "nvme0",
00:47:22.854 "trtype": "tcp",
00:47:22.854 "traddr": "127.0.0.1",
00:47:22.854 "adrfam": "ipv4",
00:47:22.854 "trsvcid": "4420",
00:47:22.854 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:47:22.854 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:47:22.854 "prchk_reftag": false,
00:47:22.854 "prchk_guard": false,
00:47:22.854 "hdgst": false,
00:47:22.854 "ddgst": false,
00:47:22.854 "psk": ":spdk-test:key1",
00:47:22.854 "allow_unrecognized_csi": false,
00:47:22.854 "method": "bdev_nvme_attach_controller",
00:47:22.854 "req_id": 1
00:47:22.854 }
00:47:22.854 Got JSON-RPC error response
00:47:22.854 response:
00:47:22.854 {
00:47:22.854 "code": -5,
00:47:22.854 "message": "Input/output error"
00:47:22.854 }
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@33 -- # sn=71650529
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 71650529
00:47:23.115 1 links removed
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@33 -- # sn=579149034
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 579149034
00:47:23.115 1 links removed
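
cleanup removes the kernel-side keys the same way get_keysn found them: resolve each name to its serial with keyctl search, then unlink that serial from the session keyring ("1 links removed" is keyctl's own confirmation). The same two steps work standalone whenever test keys are left behind:

  for name in :spdk-test:key0 :spdk-test:key1; do
      sn=$(keyctl search @s user "$name") || continue  # name already gone, nothing to do
      keyctl unlink "$sn"                              # prints e.g. "1 links removed"
  done
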
00:47:23.115 05:40:36 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1980633
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1980633 ']'
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1980633
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1980633
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1980633'
00:47:23.115 killing process with pid 1980633
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@973 -- # kill 1980633
00:47:23.115 Received shutdown signal, test time was about 1.000000 seconds
00:47:23.115
00:47:23.115 Latency(us)
00:47:23.115 [2024-12-09T04:40:37.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:47:23.115 [2024-12-09T04:40:37.112Z] ===================================================================================================================
00:47:23.115 [2024-12-09T04:40:37.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:47:23.115 05:40:36 keyring_linux -- common/autotest_common.sh@978 -- # wait 1980633
00:47:23.684 05:40:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1980477
00:47:23.684 05:40:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1980477 ']'
00:47:23.684 05:40:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1980477
00:47:23.684 05:40:37 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:47:23.684 05:40:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:47:23.684 05:40:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1980477
00:47:23.684 05:40:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:47:23.684 05:40:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:47:23.684 05:40:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1980477'
00:47:23.684 killing process with pid 1980477
00:47:23.685 05:40:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 1980477
00:47:23.685 05:40:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 1980477
00:47:25.070
00:47:25.070 real 0m6.674s
00:47:25.070 user 0m11.575s
00:47:25.070 sys 0m1.565s
00:47:25.070 05:40:38 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:47:25.070 05:40:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:47:25.070 ************************************
00:47:25.070 END TEST keyring_linux
00:47:25.070 ************************************
00:47:25.070 05:40:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:47:25.070 05:40:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:47:25.070 05:40:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:47:25.070 05:40:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:47:25.070 05:40:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:47:25.070 05:40:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:47:25.070 05:40:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:47:25.070 05:40:38 -- common/autotest_common.sh@726 -- # xtrace_disable
00:47:25.070 05:40:38 -- common/autotest_common.sh@10 -- # set +x
00:47:25.070 05:40:38 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:47:25.070 05:40:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:47:25.070 05:40:38 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:47:25.070 05:40:38 -- common/autotest_common.sh@10 -- # set +x
00:47:33.205 INFO: APP EXITING
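
The two killprocess calls in the teardown above (for the bperf pid 1980633 and the target pid 1980477) repeat the same guard each time: confirm the pid is still alive, confirm its command name is the expected reactor and not sudo, then kill and reap it. A condensed sketch of that guard, simplified from the common/autotest_common.sh helper whose line numbers appear in the trace:

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1                      # no pid supplied
      kill -0 "$pid" 2>/dev/null || return 0         # already gone
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1         # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reaping works because the pid is our child
  }
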
00:47:33.205 INFO: killing all VMs
00:47:33.205 INFO: killing vhost app
00:47:33.205 INFO: EXIT DONE
00:47:35.755 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:65:00.0 (144d a80a): Already using the nvme driver
00:47:35.755 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:47:35.755 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:47:36.016 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:47:36.016 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:47:36.016 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:47:40.216 Cleaning
00:47:40.216 Removing: /var/run/dpdk/spdk0/config
00:47:40.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:47:40.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:47:40.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:47:40.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:47:40.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:47:40.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:47:40.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:47:40.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:47:40.216 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:47:40.216 Removing: /var/run/dpdk/spdk0/hugepage_info
00:47:40.216 Removing: /var/run/dpdk/spdk1/config
00:47:40.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:47:40.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:47:40.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:47:40.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:47:40.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:47:40.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:47:40.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:47:40.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:47:40.216 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:47:40.216 Removing: /var/run/dpdk/spdk1/hugepage_info
00:47:40.216 Removing: /var/run/dpdk/spdk2/config
00:47:40.216 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:47:40.216 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:47:40.216 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:47:40.216 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:47:40.216 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:47:40.216 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:47:40.216 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:47:40.216 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:47:40.216 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:47:40.216 Removing: /var/run/dpdk/spdk2/hugepage_info
00:47:40.216 Removing: /var/run/dpdk/spdk3/config
00:47:40.216 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:47:40.216 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:47:40.216 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:47:40.216 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:47:40.216 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:47:40.216 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:47:40.216 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:47:40.216 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:47:40.216 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:47:40.216 Removing: /var/run/dpdk/spdk3/hugepage_info
00:47:40.216 Removing: /var/run/dpdk/spdk4/config
00:47:40.216 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:47:40.216 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:47:40.216 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:47:40.216 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:47:40.216 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:47:40.216 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:47:40.217 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:47:40.217 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:47:40.217 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:47:40.217 Removing: /var/run/dpdk/spdk4/hugepage_info
00:47:40.217 Removing: /dev/shm/bdev_svc_trace.1
00:47:40.217 Removing: /dev/shm/nvmf_trace.0
00:47:40.217 Removing: /dev/shm/spdk_tgt_trace.pid1292029
00:47:40.217 Removing: /var/run/dpdk/spdk0
00:47:40.217 Removing: /var/run/dpdk/spdk1
00:47:40.217 Removing: /var/run/dpdk/spdk2
00:47:40.217 Removing: /var/run/dpdk/spdk3
00:47:40.217 Removing: /var/run/dpdk/spdk4
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1185284
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1289422
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1292029
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1293661
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1294878
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1295400
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1296787
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1296814
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1297505
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1298672
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1299461
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1299949
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1300672
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1301106
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1301818
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1302068
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1302313
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1302666
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1304009
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1307613
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1308305
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1308686
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1308967
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1310068
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1310202
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1311446
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1311459
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1312036
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1312169
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1312541
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1312875
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1313897
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1314118
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1314501
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1319385
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1324774
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1337192
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1338234
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1343741
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1344256
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1349688
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1356991
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1360252
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1373306
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1384566
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1386657
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1387996
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1409962
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1415114
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1516034
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1522610
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1530047
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1542112
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1576929
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1582537
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1584449
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1586730
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1587072
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1587419
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1587769
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1588811
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1591172
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1592591
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1593308
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1596039
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1597066
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1598104
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1603203
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1610104
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1610106
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1610108
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1614998
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1619884
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1626273
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1671012
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1675955
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1683454
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1685568
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1687532
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1689615
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1695359
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1701185
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1706264
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1716470
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1716642
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1721846
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1722091
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1722421
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1722922
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1723015
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1724325
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1726215
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1728131
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1730128
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1732126
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1734126
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1741532
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1742350
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1743556
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1744864
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1751618
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1754814
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1762016
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1768764
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1778889
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1787837
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1787868
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1811374
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1812067
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1813058
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1813766
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1815108
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1815840
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1816537
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1817404
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1822631
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1823084
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1830393
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1830777
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1837317
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1842671
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1854242
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1855117
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1860720
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1861076
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1866394
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1873246
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1876324
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1888888
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1899646
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1901665
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1902842
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1923301
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1928384
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1931883
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1939723
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1939729
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1945841
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1948208
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1950691
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1952204
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1954787
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1956462
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1967176
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1967812
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1968475
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1971639
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1972309
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1972943
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1977757
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1977890
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1979707
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1980477
00:47:40.217 Removing: /var/run/dpdk/spdk_pid1980633
00:47:40.220 Clean
00:47:40.738 05:40:54 -- common/autotest_common.sh@1453 -- # return 0
00:47:40.738 05:40:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:47:40.738 05:40:54 -- common/autotest_common.sh@732 -- # xtrace_disable
00:47:40.738 05:40:54 -- common/autotest_common.sh@10 -- # set +x
00:47:40.738 05:40:54 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:47:40.738 05:40:54 -- common/autotest_common.sh@732 -- # xtrace_disable
00:47:40.738 05:40:54 -- common/autotest_common.sh@10 -- # set +x
00:47:40.738 05:40:54 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
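
Everything in the Removing list above is DPDK primary-process runtime state: each --file-prefix used during the run (spdk0 through spdk4 plus the per-pid spdk_pid* prefixes, such as this bperf run's spdk_pid1980633) leaves a directory of config, fbarray and hugepage metadata under /var/run/dpdk, while SPDK trace buffers live in /dev/shm. When a run dies before reaching this cleanup, clearing the leftovers by hand amounts to the sketch below; the paths match this job, but verify nothing is still running before deleting on a shared host:

  # only clean if no SPDK process is still using the state
  pgrep -af spdk || {
      sudo rm -rf /var/run/dpdk/spdk*   # per-prefix runtime dirs (config, fbarray_*, hugepage_info)
      sudo rm -f /dev/shm/*_trace.*     # trace buffers such as nvmf_trace.0
  }
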
00:47:40.738 05:40:54 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:47:40.738 05:40:54 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:47:40.738 05:40:54 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:47:40.738 05:40:54 -- spdk/autotest.sh@398 -- # hostname
00:47:40.738 05:40:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:47:40.999 geninfo: WARNING: invalid characters removed from testname!
00:48:07.599 05:41:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:07.860 05:41:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:10.404 05:41:23 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:11.452 05:41:25 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:13.358 05:41:26 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:14.740 05:41:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
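
The coverage pass above is a plain lcov pipeline: capture the counters accumulated during the tests from the build tree, merge them with the baseline captured before the run, then strip paths that are not SPDK's own code (the log continues through the dpdk, /usr, examples and app filters). Reduced to its skeleton, with the long --rc option lists elided:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  out=$spdk/../output
  lcov -q -c --no-external -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"       # capture test counters
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge with baseline
  lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"               # drop bundled DPDK
  lcov -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
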
00:48:16.123 05:41:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:48:16.123 05:41:30 -- spdk/autorun.sh@1 -- $ timing_finish
00:48:16.123 05:41:30 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:48:16.123 05:41:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:48:16.123 05:41:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:48:16.123 05:41:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:48:16.384 + [[ -n 1204608 ]]
00:48:16.384 + sudo kill 1204608
00:48:16.395 [Pipeline] }
00:48:16.411 [Pipeline] // stage
00:48:16.416 [Pipeline] }
00:48:16.431 [Pipeline] // timeout
00:48:16.436 [Pipeline] }
00:48:16.451 [Pipeline] // catchError
00:48:16.456 [Pipeline] }
00:48:16.472 [Pipeline] // wrap
00:48:16.479 [Pipeline] }
00:48:16.493 [Pipeline] // catchError
00:48:16.502 [Pipeline] stage
00:48:16.505 [Pipeline] { (Epilogue)
00:48:16.519 [Pipeline] catchError
00:48:16.521 [Pipeline] {
00:48:16.535 [Pipeline] echo
00:48:16.538 Cleanup processes
00:48:16.546 [Pipeline] sh
00:48:16.838 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:48:16.838 1994840 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:48:16.853 [Pipeline] sh
00:48:17.142 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:48:17.142 ++ grep -v 'sudo pgrep'
00:48:17.142 ++ awk '{print $1}'
00:48:17.142 + sudo kill -9
00:48:17.142 + true
00:48:17.156 [Pipeline] sh
00:48:17.447 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:48:29.685 [Pipeline] sh
00:48:29.975 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:48:29.975 Artifacts sizes are good
00:48:29.988 [Pipeline] archiveArtifacts
00:48:29.994 Archiving artifacts
00:48:30.155 [Pipeline] sh
00:48:30.442 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:48:30.456 [Pipeline] cleanWs
00:48:30.465 [WS-CLEANUP] Deleting project workspace...
00:48:30.465 [WS-CLEANUP] Deferred wipeout is used...
00:48:30.472 [WS-CLEANUP] done
00:48:30.474 [Pipeline] }
00:48:30.489 [Pipeline] // catchError
00:48:30.498 [Pipeline] sh
00:48:30.846 + logger -p user.info -t JENKINS-CI
00:48:30.855 [Pipeline] }
00:48:30.868 [Pipeline] // stage
00:48:30.873 [Pipeline] }
00:48:30.887 [Pipeline] // node
00:48:30.892 [Pipeline] End of Pipeline
00:48:30.919 Finished: SUCCESS